Boundary Cutting for Packet Classification



ABSTRACT:

Decision-tree-based packet classification algorithms such as HiCuts, HyperCuts, and EffiCuts show excellent search performance by exploiting the geometrical representation of rules in a classifier and searching for the geometric subspace to which each input packet belongs. However, decision tree algorithms involve complicated heuristics for determining the field and number of cuts. Moreover, fixed interval-based cutting that does not relate to the actual space each rule covers is ineffective and results in a huge storage requirement. A new efficient packet classification algorithm using boundary cutting is proposed in this paper. The proposed algorithm identifies the space that each rule covers and performs the cutting according to the space boundary. Hence, the cutting in the proposed algorithm is deterministic rather than relying on complicated heuristics, and it is more effective in improving search performance and more efficient in memory requirement. For rule sets with 1000 to 100,000 rules, simulation results show that the proposed boundary cutting algorithm provides packet classification through 10–23 on-chip memory accesses and 1–4 off-chip memory accesses on average.


EXISTING SYSTEM:

Ø Our study analyzed various decision-tree-based packet classification algorithms. If a decision tree is properly partitioned so that the internal tree nodes are stored in an on-chip memory and a large rule database is stored in an off-chip memory, the decision tree algorithm can provide very high-speed search performance. Moreover, decision tree algorithms naturally enable both the highest-priority match and the multimatch packet classification. Earlier decision tree algorithms such as HiCuts and HyperCuts select the field and number of cuts based on a locally optimized decision, which compromises the search speed and the memory requirement. This process requires a fair amount of preprocessing, which involves complicated heuristics related to each given rule set.

 

DISADVANTAGES OF EXISTING SYSTEM:

Ø The computation required for the preprocessing consumes a large amount of memory and construction time, making it difficult for these algorithms to be extended to large rule sets because of memory problems in building the decision trees.

Ø The cutting is based on a fixed interval, which does not consider the actual space that each rule covers; hence it is ineffective.

 

PROPOSED SYSTEM:

Ø In this paper, we propose a new efficient packet classification algorithm based on boundary cutting. Cutting in the proposed algorithm is based on the disjoint space covered by each rule.

Ø Hence, the packet classification table using the proposed algorithm is deterministically built and does not require the complicated heuristics used by earlier decision tree algorithms.

ADVANTAGES OF PROPOSED SYSTEM:

Ø The boundary cutting of the proposed algorithm is more effective than that of earlier algorithms since it is based on rule boundaries rather than fixed intervals. Hence, the amount of required memory is significantly reduced.

Ø Although BC loses the indexing ability at internal nodes, the binary search performed at internal nodes still provides good search performance.

 

MODULES:

Ø Building a BC Decision Tree

Ø Searching in the Boundary Cutting

Ø Selective Boundary Cutting

Ø Data Structure

 

MODULES DESCRIPTION:

 

Building a BC Decision Tree

When the cutting of a prefix plane according to rule boundaries is performed, both the starting and the ending boundaries of each rule can be used for cutting, but cutting by either one is sufficient: decision tree algorithms generally search for the subspace to which an input packet belongs and then compare the headers of the given input, over all fields, against the rules belonging to that subspace (represented by a leaf node of the decision tree).
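The sketch below illustrates, in Java, how such a boundary-based tree could be built: the starting boundaries of the rules in the current subspace are collected and used as cut points, and a partition becomes a leaf once it holds no more than binth rules. The Rule and Node classes, the binth value, the depth limit, and the round-robin field selection are simplified assumptions for illustration, not the paper's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

/**
 * A minimal sketch of BC tree construction, assuming a simplified Rule that
 * stores a [start, end] range per field. Names and constants are illustrative.
 */
class BcTreeBuilder {
    static final int BINTH = 8;         // leaf threshold (binth), assumed value
    static final int NUM_FIELDS = 2;    // number of fields used for cutting (illustrative)
    static final int MAX_DEPTH = 16;    // safety bound for this sketch

    static class Rule {
        long[] start, end;              // per-field range covered by the rule
        int priority;
        Rule(long[] s, long[] e, int p) { start = s; end = e; priority = p; }
    }

    static class Node {
        long[] boundaries;              // sorted starting boundaries used as cut points
        Node[] children;                // one child per interval (internal node)
        List<Rule> rules;               // rules stored here (leaf node)
    }

    /** Builds a subtree for the rules that fall into the current subspace. */
    static Node build(List<Rule> rules, int field, int depth) {
        Node node = new Node();

        // Collect the starting boundary of every rule in this subspace.
        TreeSet<Long> cuts = new TreeSet<>();
        for (Rule r : rules) cuts.add(r.start[field]);

        // Small partition, no useful cut, or depth bound reached: make a leaf.
        if (rules.size() <= BINTH || cuts.size() <= 1 || depth >= MAX_DEPTH) {
            node.rules = rules;
            return node;
        }

        node.boundaries = cuts.stream().mapToLong(Long::longValue).toArray();
        node.children = new Node[node.boundaries.length];
        for (int i = 0; i < node.boundaries.length; i++) {
            long lo = node.boundaries[i];
            long hi = (i + 1 < node.boundaries.length)
                    ? node.boundaries[i + 1] - 1 : Long.MAX_VALUE;
            // A rule belongs to every interval that its range overlaps.
            List<Rule> part = new ArrayList<>();
            for (Rule r : rules)
                if (r.start[field] <= hi && r.end[field] >= lo) part.add(r);
            node.children[i] = build(part, (field + 1) % NUM_FIELDS, depth + 1);
        }
        return node;
    }
}
```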

 

Searching in the Boundary Cutting

The cuts at each internal node of the BC decision tree do not have fixed intervals. Hence, at each internal node of the tree, a binary search is required to determine the proper edge to follow for a given input.

During the binary search, the pointer to the child node is remembered when the input matches the entry value or when the input is larger than the entry value.

Consider an input packet with headers (000110, 111100, 19, 23, TCP), for example. Since the first header field is used at the root node, a binary search using the header 000110 of the given input is performed. The header 000110 is compared to the middle entry of the root node, which is 010000. Since the input is smaller, the search proceeds to the smaller half and compares the input to the entry 000100. Since the input is larger, the child pointer (the second edge) is remembered, and the search proceeds to the larger half. The input is compared to 001000 and is found to be smaller, but there is no further entry in the smaller half. Hence, the search follows the remembered pointer, the second edge. At the second level, a binary search selects the last edge for the header 111100. Finally, a linear search, the same as in the HiCuts or HyperCuts algorithms, is performed over the rules stored in the leaf node.
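A minimal Java sketch of this per-node binary search is given below; the boundary array layout, the default edge, and the method name are assumptions made for illustration. For the input header 000110 and node entries such as 000100, 001000, 010000, the routine remembers the edge of entry 000100 and follows it, mirroring the walk-through above.

```java
/**
 * A hedged sketch of the binary search performed at a BC internal node:
 * the child pointer is remembered whenever the input equals or exceeds an
 * entry value, and the last remembered edge is followed. Illustrative only.
 */
class BcNodeSearch {
    /** Returns the index of the edge (child) to follow for the given header value. */
    static int selectEdge(long[] boundaries, long headerValue) {
        int lo = 0, hi = boundaries.length - 1;
        int remembered = 0;                  // default: leftmost edge
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (headerValue >= boundaries[mid]) {
                remembered = mid;            // remember this child pointer
                if (headerValue == boundaries[mid]) break;
                lo = mid + 1;                // continue in the larger half
            } else {
                hi = mid - 1;                // continue in the smaller half
            }
        }
        return remembered;                   // follow the remembered edge
    }
}
```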

 

Selective Boundary Cutting

In this module, we propose a refined structure for the BC algorithm. Decision tree algorithms, including the BC algorithm, use a threshold called binth to determine whether a subspace should become an internal node or a leaf node: if the number of rules included in a subspace is more than binth, the subspace becomes an internal node; otherwise, it becomes a leaf node. In the BC algorithm, if a subspace becomes an internal node, every starting boundary of the rules included in the subspace is used for cutting. We propose a refined structure that uses binth to select or skip the boundary of a rule at an internal node. In other words, the refined structure activates a rule boundary only when the number of rules included in a partition exceeds binth.
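The following Java sketch shows one plausible way to realize this selection, assuming the starting boundaries of the rules in a subspace are already sorted: a boundary is activated only once the rules accumulated in the current partition exceed binth. The class, method, and parameter names are illustrative, not taken from the paper.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * A minimal sketch of selective boundary cutting: walk the sorted starting
 * boundaries and activate a boundary only when the rules accumulated in the
 * current partition exceed binth. Illustrative assumptions only.
 */
class SelectiveBoundaryCut {
    /**
     * @param sortedStarts    sorted starting boundaries of the rules in the subspace
     * @param rulesStartingAt number of rules whose range starts at each boundary
     *                        (same order as sortedStarts)
     * @param binth           leaf threshold used to activate boundaries
     */
    static List<Long> activeBoundaries(long[] sortedStarts, int[] rulesStartingAt, int binth) {
        List<Long> active = new ArrayList<>();
        int accumulated = 0;                     // rules in the current partition
        for (int i = 0; i < sortedStarts.length; i++) {
            accumulated += rulesStartingAt[i];
            if (accumulated > binth) {           // partition grew past binth:
                active.add(sortedStarts[i]);     // activate this rule boundary
                accumulated = 0;                 // start a new partition
            }
        }
        return active;
    }
}
```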

 

Data Structure

There are two different ways of storing rules in decision tree algorithms. The first separates a rule table from the decision tree. In this case, each rule is stored only once in the rule table, while each leaf node of the decision tree holds pointers to the rule table for the rules included in that leaf. The number of rule pointers that each leaf must hold equals binth. When searching for the best matching rule for a given packet, or for the list of all matching rules, extra memory accesses are required to access the rule table after a leaf node in the decision tree is reached and the number of rules included in the leaf is identified. The other way stores rules within the leaf nodes themselves. In this case, search performance is better since the extra access to the rule table is avoided, but extra memory overhead is incurred due to rule replication. In the simulations in this paper, it is assumed that rules are stored in leaf nodes, since search performance is more important than the required memory.
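The two layouts can be sketched in Java as follows; the class and field names are illustrative assumptions rather than the paper's data structures.

```java
import java.util.List;

/**
 * A hedged sketch contrasting the two rule-storage layouts described above.
 * Class and field names are illustrative assumptions, not the paper's code.
 */
class LeafLayouts {
    static class Rule { long[] start, end; int priority; }  // placeholder rule type

    /**
     * Layout 1: the rule table is kept separately, and each leaf holds up to
     * binth pointers (indices) into it. Every rule is stored once, but finding
     * the matching rules needs extra accesses to the off-chip rule table.
     */
    static class PointerLeaf {
        int[] ruleIndices;                      // indices into the shared rule table
    }

    /**
     * Layout 2: rules are replicated inside the leaf itself. Search avoids the
     * extra rule-table access at the cost of memory for replicated rules; this
     * is the layout assumed in the paper's simulations.
     */
    static class InlineLeaf {
        List<Rule> rules;                       // full rule entries stored in the leaf
    }
}
```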

 

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

 

Ø System                          :         Pentium IV 2.4 GHz.

Ø Hard Disk                      :         40 GB.

Ø Floppy Drive                 :         1.44 MB.

Ø Monitor                         :         15'' VGA Colour.

Ø Mouse                            :         Logitech.

Ø RAM                              :         512 MB.

 

SOFTWARE REQUIREMENTS:

Ø Operating system           :         Windows XP/7.

Ø Coding Language            :         JAVA

Ø IDE                                :         Eclipse Kepler

