diff --git a/ALGORITHMS.t2t b/ALGORITHMS.t2t new file mode 100644 index 0000000..8e1536e --- /dev/null +++ b/ALGORITHMS.t2t @@ -0,0 +1,6 @@ + + + +---------------------------------------- + | [Home index.html] | [CHD chd.html] | [BDZ bdz.html] | [BMZ bmz.html] | [CHM chm.html] | [BRZ brz.html] | [FCH fch.html] +---------------------------------------- diff --git a/AUTHORS b/AUTHORS new file mode 100644 index 0000000..caab95f --- /dev/null +++ b/AUTHORS @@ -0,0 +1,4 @@ +Davi de Castro Reis davi@users.sourceforge.net +Djamel Belazzougui db8192@users.sourceforge.net +Fabiano Cupertino Botelho fc_botelho@users.sourceforge.net +Nivio Ziviani nivio@dcc.ufmg.br diff --git a/BDZ.t2t b/BDZ.t2t new file mode 100755 index 0000000..b32d0ae --- /dev/null +++ b/BDZ.t2t @@ -0,0 +1,174 @@ +BDZ Algorithm + + +%!includeconf: CONFIG.t2t + +---------------------------------------- +==Introduction== + +The BDZ algorithm was designed by Fabiano C. Botelho, Djamal Belazzougui, Rasmus Pagh and Nivio Ziviani. It is a simple, efficient and practical algorithm with near-optimal space usage that generates a family [figs/bdz/img8.png] of PHFs and MPHFs. It is also referred to as the BPZ algorithm because of the work presented by Botelho, Pagh and Ziviani in [[2 #papers]]. In Botelho's PhD dissertation [[1 #papers]] it is also referred to as the RAM algorithm because it is more suitable for key sets that can be handled in internal memory. + +The BDZ algorithm uses //r//-uniform random hypergraphs given by function values of //r// uniform random hash functions on the input key set //S// for generating PHFs and MPHFs that require //O(n)// bits to be stored. A hypergraph is the generalization of a standard undirected graph where each edge connects [figs/bdz/img12.png] vertices. This idea is not new, see e.g. [[8 #papers]], but we have proceeded differently to achieve a space usage of //O(n)// bits rather than //O(n log n)// bits. Evaluation time for all schemes considered is constant.
For //r=3// we obtain a space usage of approximately //2.6n// bits for an MPHF. More compact, and even simpler, representations can be achieved for larger //m//. For example, for a PHF with //m=1.23n// we can get a space usage of //1.95n// bits. + +Our best MPHF space upper bound is within a factor of //2// from the information theoretical lower bound of approximately //1.44// bits. We have shown that the BDZ algorithm is far more practical than previous methods with proven space complexity, both because of its simplicity, and because the constant factor of the space complexity is more than //6// times lower than that of its closest competitor, for plausible problem sizes. We verify the practicality experimentally, using slightly more space than in the mentioned theoretical bounds. + +---------------------------------------- + +==The Algorithm== + +The BDZ algorithm is a three-step algorithm that generates PHFs and MPHFs based on random //r//-partite hypergraphs. This approach provides a much tighter analysis and is much simpler than the one presented in [[3 #papers]], where it was implicit how to construct similar PHFs. The fastest and most compact functions are generated when //r=3//. In this case a PHF can be stored in approximately //1.95// bits per key and an MPHF in approximately //2.62// bits per key. + +Figure 1 gives an overview of the algorithm for //r=3//, taking as input a key set [figs/bdz/img22.png] containing three English words, i.e., //S={who,band,the}//. The edge-oriented data structure proposed in [[4 #papers]] is used to represent hypergraphs, where each edge is explicitly represented as an array of //r// vertices and, for each vertex //v//, there is a list of edges that are incident on //v//.
+ + | [figs/bdz/img50.png] + | **Figure 1:** (a) The mapping step generates a random acyclic //3//-partite hypergraph + | with //m=6// vertices and //n=3// edges and a list [figs/bdz/img4.png] of edges obtained when we test + | whether the hypergraph is acyclic. (b) The assigning step builds an array //g// that + | maps values from //[0,5]// to //[0,3]// to uniquely assign an edge to a vertex. (c) The ranking + | step builds the data structure used to compute function //rank// in //O(1)// time. + + + +The //Mapping Step// in Figure 1(a) carries out two important tasks: + ++ It assumes that it is possible to find three uniform hash functions //h,,0,,//, //h,,1,,// and //h,,2,,//, with ranges //{0,1}//, //{2,3}// and //{4,5}//, respectively. These functions build a one-to-one mapping of the key set //S// to the edge set //E// of a random acyclic //3//-partite hypergraph //G=(V,E)//, where //|V|=m=6// and //|E|=n=3//. In [[1,2 #papers]] it is shown that it is possible to obtain such a hypergraph with probability tending to //1// as //n// tends to infinity whenever //m=cn// and //c > 1.22//. The value of //c// that minimizes the hypergraph size (and thereby the amount of bits to represent the resulting functions) is in the range //(1.22,1.23)//. To illustrate the mapping, key "who" is mapped to edge //{h,,0,,("who"), h,,1,,("who"), h,,2,,("who")} = {1,3,5}//, key "band" is mapped to edge //{h,,0,,("band"), h,,1,,("band"), h,,2,,("band")} = {1,2,4}//, and key "the" is mapped to edge //{h,,0,,("the"), h,,1,,("the"), h,,2,,("the")} = {0,2,5}//. + ++ It tests whether the resulting random //3//-partite hypergraph contains cycles by iteratively deleting edges connecting vertices of degree 1. The deleted edges are stored in the order of deletion in a list [figs/bdz/img4.png] to be used in the assigning step. The first deleted edge in Figure 1(a) was //{1,2,4}//, the second one was //{1,3,5}// and the third one was //{0,2,5}//.
If it ends with an empty graph, then the test succeeds, otherwise it fails. + + +We now show how to use the Jenkins hash functions [[7 #papers]] to implement the three hash functions //h,,i,,//, which map values from //S// to //V,,i,,//, where [figs/bdz/img52.png]. These functions are used to build a random //3//-partite hypergraph, where [figs/bdz/img53.png] and [figs/bdz/img54.png]. Let [figs/bdz/img55.png] be a Jenkins hash function for [figs/bdz/img56.png], where +//w=32 or 64// for 32-bit and 64-bit architectures, respectively. +Let //H'// be an array of 3 //w//-bit values. The Jenkins hash function +allows us to compute in parallel the three entries in //H'// +and thereby the three hash functions //h,,i,,//, as follows: + + | //H' = h'(x)// + | //h,,0,,(x) = H'[0] mod// [figs/bdz/img136.png] + | //h,,1,,(x) = H'[1] mod// [figs/bdz/img136.png] //+// [figs/bdz/img136.png] + | //h,,2,,(x) = H'[2] mod// [figs/bdz/img136.png] //+ 2//[figs/bdz/img136.png] + + +The //Assigning Step// in Figure 1(b) outputs a PHF that maps the key set //S// into the range //[0,m-1]// and is represented by an array //g// storing values from the range //[0,3]//. The array //g// allows us to select one out of the //3// vertices of a given edge, which is associated with a key //k//. A vertex for a key //k// is given by either //h,,0,,(k)//, //h,,1,,(k)// or //h,,2,,(k)//. The function //h,,i,,(k)// to be used for //k// is chosen by calculating //i = (g[h,,0,,(k)] + g[h,,1,,(k)] + g[h,,2,,(k)]) mod 3//. For instance, the values 1 and 4 represent the keys "who" and "band" because //i = (g[1] + g[3] + g[5]) mod 3 = 0// and //h,,0,,("who") = 1//, and //i = (g[1] + g[2] + g[4]) mod 3 = 2// and //h,,2,,("band") = 4//, respectively. Let //Visited// be a boolean vector of size //m// that indicates whether a vertex has been visited. The assigning step firstly initializes //g[i]=3// to mark every vertex as unassigned and //Visited[i] = false//, [figs/bdz/img88.png].
Then, for each edge [figs/bdz/img90.png] from tail to head, it looks for the first vertex //u// belonging to //e// not yet visited. This is a sufficient condition for success [[1,2,8 #papers]]. Let //j// be the index of //u// in //e//, where //j// is in the range //[0,2]//. Then, it assigns [figs/bdz/img95.png]. Whenever it passes through a vertex //u// from //e//, if //u// has not yet been visited, it sets //Visited[u] = true//. + + +If we stop the BDZ algorithm in the assigning step we obtain a PHF with range //[0,m-1]//. The PHF has the following form: //phf(x) = h,,i(x),,(x)//, where key //x// is in //S// and //i(x) = (g[h,,0,,(x)] + g[h,,1,,(x)] + g[h,,2,,(x)]) mod 3//. In this case we do not need information for ranking and can set //g[i] = 0// whenever //g[i]// is equal to //3//, where //i// is in the range //[0,m-1]//. Therefore, the range of the values stored in //g// is narrowed from //[0,3]// to //[0,2]//. By using arithmetic coding as a block of values (see [[1,2 #papers]] for details), or any compression technique that allows random access in constant time to an array of compressed values [[5,6,12 #papers]], we can store the resulting PHFs in //m log 3 = cn log 3// bits, where //c > 1.22//. For //c = 1.23//, the space requirement is //1.95n// bits. + +The //Ranking Step// in Figure 1(c) outputs a data structure that permits narrowing the range of a PHF generated in the assigning step from //[0,m-1]// to //[0,n-1]//, and thereby an MPHF is produced. The data structure allows computing in constant time a function //rank// from //[0,m-1]// to //[0,n-1]// that counts the number of assigned positions before a given position //v// in //g//. For instance, //rank(4) = 2// because the positions //0// and //1// are assigned since //g[0]// and //g[1]// are not equal to //3//. + + +For the implementation of the ranking step we have borrowed a simple and efficient implementation from [[10 #papers]].
It requires [figs/bdz/img111.png] additional bits of space, where [figs/bdz/img112.png], and is obtained by storing explicitly the //rank// of every //k//th index in a rankTable, where [figs/bdz/img114.png]. The larger //k// is, the more compact the resulting MPHF is. Therefore, users can trade off space for evaluation time by setting //k// appropriately in the implementation. We only allow values for //k// that are powers of two (i.e., //k=2^^b,,k,,^^// for some constant //b,,k,,//) in order to replace the expensive division and modulo operations by bit-shift and bitwise "and" operations, respectively. We have used //k=256// in the experiments for generating more succinct MPHFs. We remark that it is still possible to obtain a more compact data structure by using the results presented in [[9,11 #papers]], but at the cost of a much more complex implementation. + + +We need to use an additional lookup table //T,,r,,// to guarantee the constant evaluation time of //rank(u)//. Let us illustrate how //rank(u)// is computed using both the rankTable and the lookup table //T,,r,,//. We first look up the rank of the largest precomputed index //v// lower than or equal to //u// in the rankTable, and use //T,,r,,// to count the number of assigned vertices from position //v// to //u-1//. The lookup table //T,,r,,// allows us to count in constant time the number of assigned vertices in [figs/bdz/img122.png] bits, where [figs/bdz/img112.png]. Thus the actual evaluation time is [figs/bdz/img123.png]. For simplicity and without loss of generality we let [figs/bdz/img124.png] be a multiple of the number of bits [figs/bdz/img125.png] used to encode each entry of //g//. As the values in //g// come from the range //[0,3]//, +then [figs/bdz/img126.png] bits and we have tried [figs/bdz/img124.png] equal to //8// and //16//. We would expect that [figs/bdz/img124.png] equal to 16 should provide a faster evaluation time because we would need to carry out fewer lookups in //T,,r,,//.
But, for both values the lookup table //T,,r,,// fits entirely in the CPU cache and we did not observe any significant difference in the evaluation times. Therefore we settled for the value //8//. We remark that each value of //r// requires a different lookup table //T,,r,,// that can be generated a priori. + +The resulting MPHFs have the following form: //h(x) = rank(phf(x))//. In this case, we cannot get rid of the ranking information by replacing the values 3 with 0 in the entries of //g//. Each entry in the array //g// is encoded with //2// bits and we need [figs/bdz/img133.png] additional bits to compute function //rank// in constant time. Thus, the total space to store the resulting functions is [figs/bdz/img134.png] bits. By using //c = 1.23// and [figs/bdz/img135.png] we have obtained MPHFs that require approximately //2.62// bits per key to be stored. + + +---------------------------------------- + +==Memory Consumption== + +Now we detail the memory consumption to generate and to store minimal perfect hash functions +using the BDZ algorithm. The structures responsible for memory consumption are the following: +- 3-graph: + + **first**: is a vector that stores //cn// integer numbers, each one representing + the first edge (index in the vector edges) in the list of + incident edges of each vertex. The integer numbers are 4 bytes long. Therefore, + the vector first is stored in //4cn// bytes. + + + **edges**: is a vector to represent the edges of the graph. As each edge + is composed of three vertices, each entry stores three integer numbers + of 4 bytes that represent the vertices. As there are //n// edges, the + vector edges is stored in //12n// bytes. + + + **next**: given a vertex [figs/img139.png], we can discover the edges that + contain [figs/img139.png] by following its list of incident edges, + which starts at first[[figs/img139.png]] and whose next + edges are given by next[...first[[figs/img139.png]]...].
Therefore, the vectors first and next represent + the linked lists of edges of each vertex. As there are three vertices for each edge, + when an edge is inserted in the 3-graph, it must be inserted in the three linked lists + of the vertices in its composition. Therefore, there are //3n// entries of integer + numbers in the vector next, so it is stored in //4*3n = 12n// bytes. + + + **Vertices degree (vert_degree vector)**: is a vector of //cn// bytes + that represents the degree of each vertex. We can use just one byte for each + vertex because the 3-graph is sparse, since it has more vertices than edges. + Therefore, the vertices degree is represented in //cn// bytes. + +- Acyclicity test: + + **List of deleted edges obtained when we test whether the 3-graph is a forest (queue vector)**: + is a vector of //n// integer numbers containing indices of vector edges. Therefore, it + requires //4n// bytes in internal memory. + + + **Marked edges in the acyclicity test (marked_edges vector)**: + is a bit vector of //n// bits to indicate the edges that have already been deleted during + the acyclicity test. Therefore, it requires //n/8// bytes in internal memory. + +- MPHF description + + **function //g//**: is represented by a vector of //2cn// bits. Therefore, it is + stored in //0.25cn// bytes. + + **ranktable**: is a lookup table used to store some precomputed ranking information. + It has //(cn)/(2^b)// entries of 4-byte integer numbers. Therefore it is stored in + //(4cn)/(2^b)// bytes. The larger //b// is, the more compact the resulting MPHF is and + the slower the function is. So //b// imposes a trade-off between space and time. + + **Total**: 0.25cn + (4cn)/(2^b) bytes + + +Thus, the total memory consumption of the BDZ algorithm for generating a minimal +perfect hash function (MPHF) is: //(28.125 + 5c)n + 0.25cn + (4cn)/(2^b) + O(1)// bytes.
+As the value of the constant //c// must be larger than or equal to 1.23, we have: + || //c// | //b// | Memory consumption to generate an MPHF (in bytes) | + | 1.23 | //7// | //34.62n + O(1)// | + | 1.23 | //8// | //34.60n + O(1)// | + + | **Table 1:** Memory consumption to generate an MPHF using the BDZ algorithm. + +Now we present the memory consumption to store the resulting function. +So we have: + || //c// | //b// | Memory consumption to store an MPHF (in bits) | + | 1.23 | //7// | //2.77n + O(1)// | + | 1.23 | //8// | //2.61n + O(1)// | + + | **Table 2:** Memory consumption to store an MPHF generated by the BDZ algorithm. +---------------------------------------- + +==Experimental Results== + +Experimental results comparing the BDZ algorithm with the other algorithms in the CMPH +library are presented in Botelho, Pagh and Ziviani [[1,2 #papers]]. +---------------------------------------- + +==Papers==[papers] + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho]. [Near-Optimal Space Perfect Hashing Algorithms papers/thesis.pdf]. //PhD Thesis//, //Department of Computer Science//, //Federal University of Minas Gerais//, September 2008. Supervised by [N. Ziviani http://www.dcc.ufmg.br/~nivio]. + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], [R. Pagh http://www.itu.dk/~pagh/], [N. Ziviani http://www.dcc.ufmg.br/~nivio]. [Simple and space-efficient minimal perfect hash functions papers/wads07.pdf]. //In Proceedings of the 10th International Workshop on Algorithms and Data Structures (WADS'07),// Springer-Verlag Lecture Notes in Computer Science, vol. 4619, Halifax, Canada, August 2007, 139-150. + ++ B. Chazelle, J. Kilian, R. Rubinfeld, and A. Tal. The Bloomier filter: An efficient data structure for static support lookup tables. //In Proceedings of the 15th annual ACM-SIAM symposium on Discrete algorithms (SODA'04)//, pages 30–39, Philadelphia, PA, USA, 2004. Society for Industrial and Applied Mathematics. + ++ J. Ebert.
A versatile data structure for edge-oriented graph algorithms. //Communications of the ACM//, (30):513–519, 1987. + ++ K. Fredriksson and F. Nikitin. Simple compression code supporting random access and fast string matching. //In Proceedings of the 6th International Workshop on Efficient and Experimental Algorithms (WEA'07)//, pages 203–216, 2007. + ++ R. Gonzalez and G. Navarro. Statistical encoding of succinct data structures. //In Proceedings of the 17th Annual Symposium on Combinatorial Pattern Matching (CPM'06)//, pages 294–305, 2006. + ++ B. Jenkins. Algorithm alley: Hash functions. //Dr. Dobb's Journal of Software Tools//, 22(9), September 1997. Extended version available at [http://burtleburtle.net/bob/hash/doobs.html http://burtleburtle.net/bob/hash/doobs.html]. + ++ B.S. Majewski, N.C. Wormald, G. Havas, and Z.J. Czech. A family of perfect hashing methods. //The Computer Journal//, 39(6):547–554, 1996. + ++ D. Okanohara and K. Sadakane. Practical entropy-compressed rank/select dictionary. //In Proceedings of the Workshop on Algorithm Engineering and Experiments (ALENEX'07)//, 2007. + ++ [R. Pagh http://www.itu.dk/~pagh/]. Low redundancy in static dictionaries with constant query time. //SIAM Journal on Computing//, 31(2):353–363, 2001. + ++ R. Raman, V. Raman, and S. S. Rao. Succinct indexable dictionaries with applications to encoding k-ary trees and multisets. //In Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms (SODA'02)//, pages 233–242, Philadelphia, PA, USA, 2002. Society for Industrial and Applied Mathematics. + ++ K. Sadakane and R. Grossi. Squeezing succinct data structures into entropy bounds. //In Proceedings of the 17th annual ACM-SIAM symposium on Discrete algorithms (SODA'06)//, pages 1230–1239, 2006.
+ + +%!include: ALGORITHMS.t2t + +%!include: FOOTER.t2t + +%!include(html): ''GOOGLEANALYTICS.t2t'' \ No newline at end of file diff --git a/BMZ.t2t b/BMZ.t2t new file mode 100644 index 0000000..8d0460f --- /dev/null +++ b/BMZ.t2t @@ -0,0 +1,405 @@ +BMZ Algorithm + + +%!includeconf: CONFIG.t2t + +---------------------------------------- +==History== + +At the end of 2003, professor [Nivio Ziviani http://www.dcc.ufmg.br/~nivio] was +finishing the second edition of his [book http://www.dcc.ufmg.br/algoritmos/]. +While writing the [book http://www.dcc.ufmg.br/algoritmos/], +professor [Nivio Ziviani http://www.dcc.ufmg.br/~nivio] studied the problem of generating +[minimal perfect hash functions concepts.html] +(if you are not familiar with this problem, see [[1 #papers]][[2 #papers]]). +Professor [Nivio Ziviani http://www.dcc.ufmg.br/~nivio] coded a modified version of +the [CHM algorithm chm.html], which was proposed by +Czech, Havas and Majewski, and put it in his [book http://www.dcc.ufmg.br/algoritmos/]. +The [CHM algorithm chm.html] is based on acyclic random graphs to generate +[order preserving minimal perfect hash functions concepts.html] in linear time. +Professor [Nivio Ziviani http://www.dcc.ufmg.br/~nivio] +asked himself: why must the random graph +be acyclic? In the modified version available in his [book http://www.dcc.ufmg.br/algoritmos/] he got rid of this restriction. + +The modification presented a problem: it was impossible to generate minimal perfect hash functions +for sets with more than 1000 keys. +At the same time, [Fabiano C. Botelho http://www.dcc.ufmg.br/~fbotelho], +a master's degree student at the [Department of Computer Science http://www.dcc.ufmg.br] of the +[Federal University of Minas Gerais http://www.ufmg.br], +began to be advised by [Nivio Ziviani http://www.dcc.ufmg.br/~nivio], who presented the problem +to [Fabiano http://www.dcc.ufmg.br/~fbotelho].
+ +During his master's studies, [Fabiano http://www.dcc.ufmg.br/~fbotelho] and +[Nivio Ziviani http://www.dcc.ufmg.br/~nivio] faced many problems. +In April 2004, [Fabiano http://www.dcc.ufmg.br/~fbotelho] was talking with a +friend of his (David Menoti) about the problems, +and many ideas appeared. +The ideas were implemented and a very fast algorithm to generate +minimal perfect hash functions was designed. +We refer to the algorithm as **BMZ** because it was conceived by Fabiano C. **B**otelho, +David **M**enoti and Nivio **Z**iviani. The algorithm is described in [[1 #papers]]. +To analyse the BMZ algorithm we needed some results from random graph theory, so +we invited professor [Yoshiharu Kohayakawa http://www.ime.usp.br/~yoshi] to help us. +The final description and analysis of the BMZ algorithm are presented in [[2 #papers]]. + +---------------------------------------- + +==The Algorithm== + +The BMZ algorithm shares several features with the [CHM algorithm chm.html]. +In particular, the BMZ algorithm is also +based on the generation of random graphs [figs/img27.png], where [figs/img28.png] is in +one-to-one correspondence with the key set [figs/img20.png] for which we wish to +generate a [minimal perfect hash function concepts.html]. +The two main differences between the BMZ and CHM algorithms +are as follows: (//i//) the BMZ algorithm generates random +graphs [figs/img27.png] with [figs/img29.png] and [figs/img30.png], where [figs/img31.png], +and hence [figs/img32.png] necessarily contains cycles, +while the CHM algorithm generates //acyclic// random +graphs [figs/img27.png] with [figs/img29.png] and [figs/img30.png], +with a greater number of vertices: [figs/img33.png]; +(//ii//) the CHM algorithm generates [order preserving minimal perfect hash functions concepts.html] +while the BMZ algorithm does not preserve order. Thus, the BMZ algorithm improves +the space requirement at the expense of generating functions that are not +order preserving.
+ +Suppose [figs/img14.png] is a universe of //keys//. +Let [figs/img17.png] be a set of [figs/img8.png] keys from [figs/img14.png]. +Let us show how the BMZ algorithm constructs a minimal perfect hash function [figs/img7.png]. +We make use of two auxiliary random functions [figs/img41.png] and [figs/img55.png], +where [figs/img56.png] for some suitably chosen integer [figs/img57.png], +where [figs/img58.png]. We build a random graph [figs/img59.png] on [figs/img60.png], +whose edge set is [figs/img61.png]. There is an edge in [figs/img32.png] for each +key in the set of keys [figs/img20.png]. + +In what follows, we shall be interested in the //2-core// of +the random graph [figs/img32.png], that is, the maximal subgraph +of [figs/img32.png] with minimum degree at +least 2 (see [[2 #papers]] for details). +Because of its importance in our context, we call the 2-core the +//critical// subgraph of [figs/img32.png] and denote it by [figs/img63.png]. +The vertices and edges in [figs/img63.png] are said to be //critical//. +We let [figs/img64.png] and [figs/img65.png]. +Moreover, we let [figs/img66.png] be the set of //non-critical// +vertices in [figs/img32.png]. +We also let [figs/img67.png] be the set of all critical +vertices that have at least one non-critical vertex as a neighbour. +Let [figs/img68.png] be the set of //non-critical// edges in [figs/img32.png]. +Finally, we let [figs/img69.png] be the //non-critical// subgraph +of [figs/img32.png]. +The non-critical subgraph [figs/img70.png] corresponds to the //acyclic part// +of [figs/img32.png]. +We have [figs/img71.png]. + +We then construct a suitable labelling [figs/img72.png] of the vertices +of [figs/img32.png]: we choose [figs/img73.png] for each [figs/img74.png] in such +a way that [figs/img75.png] ([figs/img18.png]) is a +minimal perfect hash function for [figs/img20.png].
+This labelling [figs/img37.png] can be found in linear time +if the number of edges in [figs/img63.png] is at most [figs/img76.png] (see [[2 #papers]] +for details). + +Figure 1 presents a pseudo code for the BMZ algorithm. +The procedure BMZ ([figs/img20.png], [figs/img37.png]) receives as input the set of +keys [figs/img20.png] and produces the labelling [figs/img37.png]. +The method uses a mapping, ordering and searching approach. +We now describe each step. + | procedure BMZ ([figs/img20.png], [figs/img37.png]) + |     Mapping ([figs/img20.png], [figs/img32.png]); + |     Ordering ([figs/img32.png], [figs/img63.png], [figs/img70.png]); + |     Searching ([figs/img32.png], [figs/img63.png], [figs/img70.png], [figs/img37.png]); + | **Figure 1**: Main steps of BMZ algorithm for constructing a minimal perfect hash function + +---------------------------------------- + +===Mapping Step=== + +The procedure Mapping ([figs/img20.png], [figs/img32.png]) receives as input the set +of keys [figs/img20.png] and generates the random graph [figs/img59.png], by generating +two auxiliary functions [figs/img41.png], [figs/img78.png]. + +The functions [figs/img41.png] and [figs/img42.png] are constructed as follows. +We impose some upper bound [figs/img79.png] on the lengths of the keys in [figs/img20.png]. +To define [figs/img80.png] ([figs/img81.png], [figs/img62.png]), we generate +an [figs/img82.png] table of random integers [figs/img83.png]. +For a key [figs/img18.png] of length [figs/img84.png] and [figs/img85.png], we let + + | [figs/img86.png] + +The random graph [figs/img59.png] has vertex set [figs/img56.png] and +edge set [figs/img61.png]. We need [figs/img32.png] to be +simple, i.e., [figs/img32.png] should have neither loops nor multiple edges. +A loop occurs when [figs/img87.png] for some [figs/img18.png]. +We solve this in an ad hoc manner: we simply let [figs/img88.png] in this case. +If we still find a loop after this, we generate another pair [figs/img89.png]. 
+When a multiple edge occurs we abort and generate a new pair [figs/img89.png]. +Although the function above causes [collisions concepts.html] with probability //1/t//, +in [cmph library index.html] we use faster hash +functions ([DJB2 hash http://www.cs.yorku.ca/~oz/hash.html], [FNV hash http://www.isthe.com/chongo/tech/comp/fnv/], + [Jenkins hash http://burtleburtle.net/bob/hash/doobs.html] and [SDBM hash http://www.cs.yorku.ca/~oz/hash.html]) + in which we do not need to impose any upper bound [figs/img79.png] on the lengths of the keys in [figs/img20.png]. + +As mentioned before, for us to find the labelling [figs/img72.png] of the +vertices of [figs/img59.png] in linear time, +we require that [figs/img108.png]. +The crucial step now is to determine the value +of [figs/img1.png] (in [figs/img57.png]) to obtain a random +graph [figs/img71.png] with [figs/img109.png]. +Botelho, Menoti and Ziviani determined empirically in [[1 #papers]] that +the value of [figs/img1.png] is //1.15//. This value is remarkably +close to the theoretical value determined in [[2 #papers]], +which is around [figs/img112.png]. + +---------------------------------------- + +===Ordering Step=== + +The procedure Ordering ([figs/img32.png], [figs/img63.png], [figs/img70.png]) receives +as input the graph [figs/img32.png] and partitions [figs/img32.png] into the two +subgraphs [figs/img63.png] and [figs/img70.png], so that [figs/img71.png]. + +Figure 2 presents a sample graph with 9 vertices +and 8 edges, where the degree of a vertex is shown beside each vertex. +Initially, all vertices with degree 1 are added to a queue [figs/img136.png]. +For the example shown in Figure 2(a), [figs/img137.png] after the initialization step. + + | [figs/img138.png] + | **Figure 2:** Ordering step for a graph with 9 vertices and 8 edges.
+ +Next, we remove one vertex [figs/img139.png] from the queue and decrement its degree and +the degree of the vertices with degree greater than 0 in the adjacency +list of [figs/img139.png], as depicted in Figure 2(b) for [figs/img140.png]. +At this point, the vertices adjacent to [figs/img139.png] whose degree is 1 are +inserted into the queue, such as vertex 1. +This process is repeated until the queue becomes empty. +All vertices with degree 0 are non-critical vertices and the others are +critical vertices, as depicted in Figure 2(c). +Finally, to determine the vertices in [figs/img141.png] we collect all +vertices [figs/img142.png] with at least one vertex [figs/img143.png] that +is in Adj[figs/img144.png] and in [figs/img145.png], such as vertex 8 in Figure 2(c). + +---------------------------------------- + +===Searching Step=== + +In the searching step, the key part is +the //perfect assignment problem//: find [figs/img153.png] such that +the function [figs/img154.png] defined by + + | [figs/img155.png] + +is a bijection from [figs/img156.png] to [figs/img157.png] (recall [figs/img158.png]). +We are interested in a labelling [figs/img72.png] of +the vertices of the graph [figs/img59.png] with +the property that if [figs/img11.png] and [figs/img22.png] are keys +in [figs/img20.png], then [figs/img159.png]; that is, if we associate +to each edge the sum of the labels on its endpoints, then these values +should all be distinct. +Moreover, we require that all the sums [figs/img160.png] ([figs/img18.png]) +fall between [figs/img115.png] and [figs/img161.png], and thus we have a bijection +between [figs/img20.png] and [figs/img157.png]. + +The procedure Searching ([figs/img32.png], [figs/img63.png], [figs/img70.png], [figs/img37.png]) +receives as input [figs/img32.png], [figs/img63.png], [figs/img70.png] and finds a +suitable [figs/img162.png] bit value for each vertex [figs/img74.png], stored in the +array [figs/img37.png].
+This step is first performed for the vertices in the +critical subgraph [figs/img63.png] of [figs/img32.png] (the 2-core of [figs/img32.png]) +and then it is performed for the vertices in [figs/img70.png] (the non-critical subgraph +of [figs/img32.png] that contains the "acyclic part" of [figs/img32.png]). +The reason the assignment of the [figs/img37.png] values is first +performed on the vertices in [figs/img63.png] is to resolve reassignments +as early as possible (such reassignments are consequences of the cycles +in [figs/img63.png] and are depicted hereinafter). + +---------------------------------------- + +====Assignment of Values to Critical Vertices==== + +The labels [figs/img73.png] ([figs/img142.png]) +are assigned in increasing order following a greedy +strategy where the critical vertices [figs/img139.png] are considered one at a time, +according to a breadth-first search on [figs/img63.png]. +If a candidate value [figs/img11.png] for [figs/img73.png] is forbidden +because setting [figs/img163.png] would create two edges with the same sum, +we try [figs/img164.png] for [figs/img73.png]. This fact is referred to +as a //reassignment//. + +Let [figs/img165.png] be the set of addresses assigned to edges in [figs/img166.png]. +Initially [figs/img167.png]. +Let [figs/img11.png] be a candidate value for [figs/img73.png]. +Initially [figs/img168.png]. +Considering the subgraph [figs/img63.png] in Figure 2(c), +a step by step example of the assignment of values to vertices in [figs/img63.png] is +presented in Figure 3. +Initially, a vertex [figs/img139.png] is chosen, the assignment [figs/img163.png] is made +and [figs/img11.png] is set to [figs/img164.png]. +For example, suppose that vertex [figs/img169.png] in Figure 3(a) is +chosen, the assignment [figs/img170.png] is made and [figs/img11.png] is set to [figs/img96.png]. + + | [figs/img171.png] + | **Figure 3:** Example of the assignment of values to critical vertices. 
+
+In Figure 3(b), following the adjacency list of vertex [figs/img169.png],
+the unassigned vertex [figs/img115.png] is reached.
+At this point, we collect in the temporary variable [figs/img172.png] all adjacencies
+of vertex [figs/img115.png] that have been assigned an [figs/img11.png] value,
+and [figs/img173.png].
+Next, for all [figs/img174.png], we check if [figs/img175.png].
+Since [figs/img176.png], [figs/img177.png] is set
+to [figs/img96.png], [figs/img11.png] is incremented
+by 1 (now [figs/img178.png]) and [figs/img179.png].
+Next, vertex [figs/img180.png] is reached, [figs/img181.png] is set
+to [figs/img62.png], [figs/img11.png] is set to [figs/img180.png] and [figs/img182.png].
+Next, vertex [figs/img183.png] is reached and [figs/img184.png].
+Since [figs/img185.png] and [figs/img186.png], [figs/img187.png] is
+set to [figs/img180.png], [figs/img11.png] is set to [figs/img183.png] and [figs/img188.png].
+Finally, vertex [figs/img189.png] is reached and [figs/img190.png].
+Since [figs/img191.png], [figs/img11.png] is incremented by 1 and set to 5, as depicted in
+Figure 3(c).
+Since [figs/img192.png], [figs/img11.png] is again incremented by 1 and set to 6,
+as depicted in Figure 3(d).
+These two reassignments are indicated by the arrows in Figure 3.
+Since [figs/img193.png] and [figs/img194.png], [figs/img195.png] is set
+to [figs/img196.png] and [figs/img197.png]. This finishes the algorithm.
+
+----------------------------------------
+
+====Assignment of Values to Non-Critical Vertices====
+
+As [figs/img70.png] is acyclic, we can impose the order in which addresses are
+associated with edges in [figs/img70.png], making this step simple to solve
+by a standard depth-first search algorithm.
+Therefore, in the assignment of values to vertices in [figs/img70.png] we
+benefit from the unused addresses in the gaps left by the assignment of values
+to vertices in [figs/img63.png]. 
+For that, we start the depth-first search from the vertices in [figs/img141.png] because
+the [figs/img37.png] values for these critical vertices were already assigned
+and cannot be changed.
+
+Considering the subgraph [figs/img70.png] in Figure 2(c),
+a step-by-step example of the assignment of values to vertices in [figs/img70.png] is
+presented in Figure 4.
+Figure 4(a) presents the initial state of the algorithm.
+The critical vertex 8 is the only one that has non-critical vertices as
+neighbors.
+In the example presented in Figure 3, the addresses [figs/img198.png] were not used.
+So, taking the first unused address [figs/img115.png] and the vertex [figs/img96.png],
+which is reached from the vertex [figs/img169.png], [figs/img199.png] is set
+to [figs/img200.png], as shown in Figure 4(b).
+The only vertex that is reached from vertex [figs/img96.png] is vertex [figs/img62.png], so
+taking the unused address [figs/img183.png] we set [figs/img201.png] to [figs/img202.png],
+as shown in Figure 4(c).
+This process is repeated until the UnAssignedAddresses list becomes empty.
+
+ | [figs/img203.png]
+ | **Figure 4:** Example of the assignment of values to non-critical vertices.
+
+----------------------------------------
+
+==The Heuristic==[heuristic]
+
+We now present a heuristic for the BMZ algorithm that
+reduces the value of [figs/img1.png] to any given value between //1.15// and //0.93//.
+This reduces the space requirement to store the resulting function
+to any given value between [figs/img12.png] words and [figs/img13.png] words.
+The heuristic reuses, when possible, the set
+of [figs/img11.png] values that caused reassignments, just before
+trying [figs/img164.png].
+Decreasing the value of [figs/img1.png] leads to an increase in the number of
+iterations to generate [figs/img32.png]. 
+For example, for [figs/img244.png] and [figs/img6.png], the analytical expected numbers
+of iterations are [figs/img245.png] and [figs/img246.png], respectively (see [[2 #papers]]
+for details),
+while for [figs/img128.png] the same value is around //2.13//.
+
+----------------------------------------
+
+==Memory Consumption==
+
+Now we detail the memory consumption to generate and to store minimal perfect hash functions
+using the BMZ algorithm. The structures responsible for memory consumption are the
+following:
+- Graph:
+ + **first**: is a vector that stores //cn// integer numbers, each one representing
+ the first edge (index in the vector edges) in the list of
+ edges of each vertex.
+ The integer numbers are 4 bytes long. Therefore,
+ the vector first is stored in //4cn// bytes.
+
+ + **edges**: is a vector to represent the edges of the graph. As each edge
+ is composed of a pair of vertices, each entry stores two integer numbers
+ of 4 bytes that represent the vertices. As there are //n// edges, the
+ vector edges is stored in //8n// bytes.
+
+ + **next**: given a vertex [figs/img139.png], we can discover the edges that
+ contain [figs/img139.png] by following its list of edges,
+ which starts at first[[figs/img139.png]] and the next
+ edges are given by next[...first[[figs/img139.png]]...]. Therefore, the vectors first and next represent
+ the linked lists of edges of each vertex. As there are two vertices for each edge,
+ when an edge is inserted in the graph, it must be inserted in the two linked lists
+ of the vertices in its composition. Therefore, there are //2n// entries of integer
+ numbers in the vector next, so it is stored in //4*2n = 8n// bytes.
+
+ + **critical vertices (critical_nodes vector)**: is a vector of //cn// bits,
+ where each bit indicates if a vertex is critical (1) or non-critical (0).
+ Therefore, the critical and non-critical vertices are represented in //cn/8// bytes. 
+
+ + **critical edges (used_edges vector)**: is a vector of //n// bits, where each
+ bit indicates if an edge is critical (1) or non-critical (0). Therefore, the
+ critical and non-critical edges are represented in //n/8// bytes.
+
+- Other auxiliary structures
+ + **queue**: is a queue of integer numbers used in the breadth-first search of the
+ assignment of values to critical vertices. There is an entry in the queue for
+ each two critical vertices. Let [figs/img110.png] be the expected number of critical
+ vertices. Therefore, the queue is stored in //4*0.5*[figs/img110.png] = 2[figs/img110.png]// bytes.
+
+ + **visited**: is a vector of //cn// bits, where each bit indicates if the g value of
+ a given vertex was already defined. Therefore, the vector visited is stored
+ in //cn/8// bytes.
+
+ + **function //g//**: is represented by a vector of //cn// integer numbers.
+ As each integer number is 4 bytes long, the function //g// is stored in
+ //4cn// bytes.
+
+
+Thus, the total memory consumption of the BMZ algorithm for generating a minimal
+perfect hash function (MPHF) is: //(8.25c + 16.125)n + 2[figs/img110.png] + O(1)// bytes.
+As the value of the constant //c// may be 1.15 or 0.93, we have:
+ || //c// | [figs/img110.png] | Memory consumption to generate a MPHF |
+ | 0.93 | //0.497n// | //24.80n + O(1)// |
+ | 1.15 | //0.401n// | //26.42n + O(1)// |
+
+ | **Table 1:** Memory consumption to generate a MPHF using the BMZ algorithm.
+
+The values of [figs/img110.png] were calculated using Eq. (1) presented in [[2 #papers]].
+
+Now we present the memory consumption to store the resulting function.
+We only need to store the //g// function. Thus, we need //4cn// bytes.
+Again we have:
+ || //c// | Memory consumption to store a MPHF |
+ | 0.93 | //3.72n// |
+ | 1.15 | //4.60n// |
+
+ | **Table 2:** Memory consumption to store a MPHF generated by the BMZ algorithm. 
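+For illustration, the first/edges/next representation described above can be modelled as follows. This is a sketch of ours, not the actual CMPH C code; here `-1` marks the end of a linked list, and each edge contributes one "incidence slot" per endpoint, which is why the next vector has //2n// entries.

```python
def build_graph(num_vertices, edge_list):
    """first[v] heads vertex v's linked list of incidences; nxt chains
    the rest. Slot 2*e+side is endpoint `side` of edge e."""
    first = [-1] * num_vertices
    edges = []                    # n (u, v) pairs
    nxt = []                      # 2n entries chaining each vertex's incidences
    for e, (u, v) in enumerate(edge_list):
        edges.append((u, v))
        for side, w in enumerate((u, v)):
            slot = 2 * e + side   # incidence id of endpoint w in edge e
            nxt.append(first[w])  # prepend to w's list
            first[w] = slot
    return first, edges, nxt

def edges_of(first, nxt, v):
    """Walk v's linked list of incidences and report the edge ids."""
    out, slot = [], first[v]
    while slot != -1:
        out.append(slot // 2)     # incidence id -> edge id
        slot = nxt[slot]
    return out
```

With edges (0,1) and (1,2), vertex 1 appears in both linked lists, so `edges_of` reports both edge ids for it.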
+---------------------------------------- + +==Experimental Results== + +[CHM x BMZ comparison.html] + +---------------------------------------- + +==Papers==[papers] + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], D. Menoti, [N. Ziviani http://www.dcc.ufmg.br/~nivio]. [A New algorithm for constructing minimal perfect hash functions papers/bmz_tr004_04.ps], Technical Report TR004/04, Department of Computer Science, Federal University of Minas Gerais, 2004. + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], Y. Kohayakawa, and [N. Ziviani http://www.dcc.ufmg.br/~nivio]. [A Practical Minimal Perfect Hashing Method papers/wea05.pdf]. //4th International Workshop on efficient and Experimental Algorithms (WEA05),// Springer-Verlag Lecture Notes in Computer Science, vol. 3505, Santorini Island, Greece, May 2005, 488-500. + + +%!include: ALGORITHMS.t2t + +%!include: FOOTER.t2t + +%!include(html): ''GOOGLEANALYTICS.t2t'' \ No newline at end of file diff --git a/BRZ.t2t b/BRZ.t2t new file mode 100644 index 0000000..59c032f --- /dev/null +++ b/BRZ.t2t @@ -0,0 +1,440 @@ +External Memory Based Algorithm + + +%!includeconf: CONFIG.t2t + +---------------------------------------- +==Introduction== + +Until now, because of the limitations of current algorithms, +the use of MPHFs is restricted to scenarios where the set of keys being hashed is +relatively small. +However, in many cases it is crucial to deal in an efficient way with very large +sets of keys. +Due to the exponential growth of the Web, the work with huge collections is becoming +a daily task. +For instance, the simple assignment of number identifiers to web pages of a collection +can be a challenging task. +While traditional databases simply cannot handle more traffic once the working +set of URLs does not fit in main memory anymore[[4 #papers]], the algorithm we propose here to +construct MPHFs can easily scale to billions of entries. 
+
+As there are many applications for MPHFs, it is
+important to design and implement space and time efficient algorithms for
+constructing such functions.
+The attractiveness of using MPHFs depends on the following issues:
+
++ The amount of CPU time required by the algorithms for constructing MPHFs.
+
++ The space requirements of the algorithms for constructing MPHFs.
+
++ The amount of CPU time required by a MPHF for each retrieval.
+
++ The space requirements of the description of the resulting MPHFs to be used at retrieval time.
+
+
+We present here a novel external memory based algorithm for constructing MPHFs that
+is very efficient with respect to the four requirements mentioned previously.
+First, the algorithm constructs a MPHF in time linear in the number of keys,
+which is optimal.
+For instance, for a collection of 1 billion URLs
+collected from the web, each one 64 characters long on average, the time to construct a
+MPHF using a 2.4 gigahertz PC with 500 megabytes of available main memory
+is approximately 3 hours.
+Second, the algorithm needs a small a priori defined vector of [figs/brz/img23.png] one
+byte entries in main memory to construct a MPHF.
+For the collection of 1 billion URLs and using [figs/brz/img4.png], the algorithm needs only
+5.45 megabytes of internal memory.
+Third, the evaluation of the MPHF for each retrieval requires three memory accesses and
+the computation of three universal hash functions.
+This is not optimal as any MPHF requires at least one memory access and the computation
+of two universal hash functions.
+Fourth, the description of a MPHF takes a constant number of bits for each key, which is optimal.
+For the collection of 1 billion URLs, it needs 8.1 bits for each key,
+while the theoretical lower bound is [figs/brz/img24.png] bits per key.
+
+----------------------------------------
+
+
+==The Algorithm==
+
+The main idea supporting our algorithm is the classical divide and conquer technique. 
+The algorithm is a two-step external memory based algorithm
+that generates a MPHF //h// for a set //S// of //n// keys.
+Figure 1 illustrates the two steps of the
+algorithm: the partitioning step and the searching step.
+
+ | [figs/brz/brz.png]
+ | **Figure 1:** Main steps of our algorithm.
+
+The partitioning step takes a key set //S// and uses a universal hash
+function [figs/brz/img42.png] proposed by Jenkins [[5 #papers]]
+to transform each key [figs/brz/img43.png] into an integer [figs/brz/img44.png].
+Reducing [figs/brz/img44.png] modulo [figs/brz/img23.png], we partition //S//
+into [figs/brz/img23.png] buckets containing at most 256 keys in each bucket (with high
+probability).
+
+The searching step generates a MPHF [figs/brz/img46.png] for each bucket //i//, [figs/brz/img47.png].
+The resulting MPHF //h(k)//, [figs/brz/img43.png], is given by
+
+ | [figs/brz/img49.png]
+
+where [figs/brz/img50.png].
+The //i//th entry //offset[i]// of the displacement vector
+//offset//, [figs/brz/img47.png], contains the total number
+of keys in the buckets from 0 to //i-1//, that is, it gives the interval of the
+keys in the hash table addressed by the MPHF [figs/brz/img46.png]. In the following we explain
+each step in detail.
+
+----------------------------------------
+
+=== Partitioning step ===
+
+The set //S// of //n// keys is partitioned into [figs/brz/img23.png] buckets,
+where //b// is a suitable parameter chosen to guarantee
+that each bucket has at most 256 keys with high probability
+(see [[2 #papers]] for details).
+The partitioning step works as follows:
+
+ | [figs/brz/img54.png]
+ | **Figure 2:** Partitioning step.
+
+Statement 1.1 of the **for** loop presented in Figure 2
+reads sequentially all the keys of block [figs/brz/img55.png] from disk into an internal area
+of size [figs/brz/img8.png].
+
+Statement 1.2 performs an indirect bucket sort of the keys in block [figs/brz/img55.png] and
+at the same time updates the entries in the vector //size//. 
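+A compact sketch of this indirect bucket sort, together with the //offset// computation performed at the end of the partitioning step, could look like the following. This is our own illustrative Python, not the CMPH implementation; the `h0` argument stands in for the Jenkins hash, and `nbuckets` for the number of buckets.

```python
def bucket_sort_block(keys, nbuckets, h0):
    """Indirect bucket sort of one block: count keys per bucket,
    reserve contiguous areas, then place the key pointers."""
    addr = [h0(k) % nbuckets for k in keys]   # bucket address of each key
    size = [0] * nbuckets                     # the 'size' counts for this block
    for a in addr:
        size[a] += 1
    pos = [0] * nbuckets                      # start of each bucket's area
    for i in range(1, nbuckets):
        pos[i] = pos[i - 1] + size[i - 1]
    pointers = [0] * len(keys)
    for j, a in enumerate(addr):              # stable placement of pointers
        pointers[pos[a]] = j
        pos[a] += 1
    return pointers, size

def offsets(size):
    """offset[i] = total number of keys in buckets 0 .. i-1."""
    off = [0] * len(size)
    for i in range(1, len(size)):
        off[i] = off[i - 1] + size[i - 1]
    return off
```

The placement pass visits the keys in order, so pointers within each bucket keep their original relative order.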
+Let us briefly describe how [figs/brz/img55.png] is partitioned among +the [figs/brz/img23.png] buckets. +We use a local array of [figs/brz/img23.png] counters to store a +count of how many keys from [figs/brz/img55.png] belong to each bucket. +The pointers to the keys in each bucket //i//, [figs/brz/img47.png], +are stored in contiguous positions in an array. +For this we first reserve the required number of entries +in this array of pointers using the information from the array of counters. +Next, we place the pointers to the keys in each bucket into the respective +reserved areas in the array (i.e., we place the pointers to the keys in bucket 0, +followed by the pointers to the keys in bucket 1, and so on). + +To find the bucket address of a given key +we use the universal hash function [figs/brz/img44.png][[5 #papers]]. +Key //k// goes into bucket //i//, where + + | [figs/brz/img57.png] (1) + +Figure 3(a) shows a //logical// view of the [figs/brz/img23.png] buckets +generated in the partitioning step. +In reality, the keys belonging to each bucket are distributed among many files, +as depicted in Figure 3(b). +In the example of Figure 3(b), the keys in bucket 0 +appear in files 1 and //N//, the keys in bucket 1 appear in files 1, 2 +and //N//, and so on. + + | [figs/brz/brz-partitioning.png] + | **Figure 3:** Situation of the buckets at the end of the partitioning step: (a) Logical view (b) Physical view. + +This scattering of the keys in the buckets could generate a performance +problem because of the potential number of seeks +needed to read the keys in each bucket from the //N// files in disk +during the searching step. +But, as we show in [[2 #papers]], the number of seeks +can be kept small using buffering techniques. 
+Considering that only the vector //size//, which has [figs/brz/img23.png] one-byte +entries (remember that each bucket has at most 256 keys), +must be maintained in main memory during the searching step, +almost all main memory is available to be used as disk I/O buffer. + +The last step is to compute the //offset// vector and dump it to the disk. +We use the vector //size// to compute the +//offset// displacement vector. +The //offset[i]// entry contains the number of keys +in the buckets //0, 1, ..., i-1//. +As //size[i]// stores the number of keys +in bucket //i//, where [figs/brz/img47.png], we have + + | [figs/brz/img63.png] + +---------------------------------------- + +=== Searching step === + +The searching step is responsible for generating a MPHF for each +bucket. Figure 4 presents the searching step algorithm. + + | [figs/brz/img64.png] + | **Figure 4:** Searching step. + +Statement 1 of Figure 4 inserts one key from each file +in a minimum heap //H// of size //N//. +The order relation in //H// is given by the bucket address //i// given by +Eq. (1). + +Statement 2 has two important steps. +In statement 2.1, a bucket is read from disk, +as described below. +In statement 2.2, a MPHF is generated for each bucket //i//, as described +in the following. +The description of MPHF[figs/brz/img46.png] is a vector [figs/brz/img66.png] of 8-bit integers. +Finally, statement 2.3 writes the description [figs/brz/img66.png] of MPHF[figs/brz/img46.png] to disk. + +---------------------------------------- + +==== Reading a bucket from disk ==== + +In this section we present the refinement of statement 2.1 of +Figure 4. +The algorithm to read bucket //i// from disk is presented +in Figure 5. + + | [figs/brz/img67.png] + | **Figure 5:** Reading a bucket. + +Bucket //i// is distributed among many files and the heap //H// is used to drive a +multiway merge operation. 
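+A simplified model of this heap-driven merge is sketched below, under the assumption that each file delivers its keys already grouped by bucket address (which the partitioning step guarantees). This is our own sketch, not the CMPH implementation; `files` are in-memory sequences standing in for the on-disk files, and `bucket_of` stands in for Eq. (1).

```python
import heapq

def read_buckets(files, bucket_of):
    """Merge N sequential files with a min-heap of (bucket, file, key)
    triples, yielding each bucket together with all of its keys."""
    iters = [iter(f) for f in files]
    H = []
    for j, it in enumerate(iters):            # one key from each file
        k = next(it, None)
        if k is not None:
            heapq.heappush(H, (bucket_of(k), j, k))
    while H:
        i = H[0][0]                           # smallest bucket address in H
        bucket = []
        while H and H[0][0] == i:
            _, j, k = heapq.heappop(H)
            bucket.append(k)
            for nk in iters[j]:               # read file j sequentially
                if bucket_of(nk) == i:
                    bucket.append(nk)
                else:                         # first key of a later bucket
                    heapq.heappush(H, (bucket_of(nk), j, nk))
                    break
        yield i, bucket
```

Each file is read strictly sequentially; only the first key of the next bucket goes back into the heap, mirroring statements 1.3 and 1.4 described next.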
+In Figure 5, statement 1.1 extracts and removes the triple
+//(i, j, k)// from //H//, where //i// is a minimum value in //H//.
+Statement 1.2 inserts key //k// in bucket //i//.
+Notice that the //k// in the triple //(i, j, k)// is in fact a pointer to
+the first byte of the key that is kept in contiguous positions of an array of characters
+(this array containing the keys is initialized during the heap construction
+in statement 1 of Figure 4).
+Statement 1.3 performs a seek operation in File //j// on disk for the first
+read operation and reads sequentially all keys //k// that have the same //i//
+and inserts them all in bucket //i//.
+Finally, statement 1.4 inserts in //H// the triple //(i, j, x)//,
+where //x// is the first key read from File //j// (in statement 1.3)
+that does not have the same bucket address as the previous keys.
+
+The number of seek operations on disk performed in statement 1.3 is discussed
+in [[2, Section 5.1 #papers]],
+where we present a buffering technique that brings down
+the time spent with seeks.
+
+----------------------------------------
+
+==== Generating a MPHF for each bucket ====
+
+To the best of our knowledge the [BMZ algorithm bmz.html] we have designed in
+our previous works [[1,3 #papers]] is the fastest published algorithm for
+constructing MPHFs.
+That is why we are using that algorithm as a building block for the
+algorithm presented here. In reality, we are using
+an optimized version of BMZ (BMZ8) for small sets of keys (at most 256 keys).
+[Click here to see details about the BMZ algorithm bmz.html].
+
+----------------------------------------
+
+==Analysis of the Algorithm==
+
+Analytical results and the complete analysis of the external memory based algorithm
+can be found in [[2 #papers]].
+
+----------------------------------------
+
+==Experimental Results==
+
+In this section we present the experimental results.
+We start by presenting the experimental setup. 
+We then present experimental results for
+the internal memory based algorithm ([the BMZ algorithm bmz.html])
+and for our external memory based algorithm.
+Finally, we discuss how the amount of internal memory available
+affects the runtime of the external memory based algorithm.
+
+----------------------------------------
+
+===The data and the experimental setup===
+
+All experiments were carried out on
+a computer running the Linux operating system, version 2.6,
+with a 2.4 gigahertz processor and
+1 gigabyte of main memory.
+In the experiments related to the new
+algorithm we limited the main memory to 500 megabytes.
+
+Our data consists of a collection of 1 billion
+URLs collected from the Web, each URL 64 characters long on average.
+The collection takes 60.5 gigabytes of disk space.
+
+----------------------------------------
+
+===Performance of the BMZ Algorithm===
+
+[The BMZ algorithm bmz.html] is used for constructing a MPHF for each bucket.
+It is a randomized algorithm because it needs to generate a simple random graph
+in its first step.
+Once the graph is obtained the other two steps are deterministic.
+
+Thus, we can consider the runtime of the algorithm to have
+the form [figs/brz/img159.png] for an input of //n// keys,
+where [figs/brz/img160.png] is some machine dependent
+constant that further depends on the length of the keys and //Z// is a random
+variable with geometric distribution with mean [figs/brz/img162.png]. All results
+in our experiments were obtained taking //c=1//; the value of //c//, with //c// in //[0.93,1.15]//,
+in fact has little influence on the runtime, as shown in [[3 #papers]].
+
+The values chosen for //n// were 1, 2, 4, 8, 16 and 32 million.
+Although we have a dataset with 1 billion URLs, on a PC with
+1 gigabyte of main memory, the algorithm is able
+to handle an input with at most 32 million keys.
+This is mainly because of the graph we need to keep in main memory. 
+The algorithm requires //25n + O(1)// bytes for constructing +a MPHF ([click here to get details about the data structures used by the BMZ algorithm bmz.html]). + +In order to estimate the number of trials for each value of //n// we use +a statistical method for determining a suitable sample size (see, e.g., [[6, Chapter 13 #papers]]). +As we obtained different values for each //n//, +we used the maximal value obtained, namely, 300 trials in order to have +a confidence level of 95 %. + + +Table 1 presents the runtime average for each //n//, +the respective standard deviations, and +the respective confidence intervals given by +the average time [figs/brz/img167.png] the distance from average time +considering a confidence level of 95 %. +Observing the runtime averages one sees that +the algorithm runs in expected linear time, +as shown in [[3 #papers]]. + +%!include(html): ''TABLEBRZ1.t2t'' + | **Table 1:** Internal memory based algorithm: average time in seconds for constructing a MPHF, the standard deviation (SD), and the confidence intervals considering a confidence level of 95 %. + +Figure 6 presents the runtime for each trial. In addition, +the solid line corresponds to a linear regression model +obtained from the experimental measurements. +As we can see, the runtime for a given //n// has a considerable +fluctuation. However, the fluctuation also grows linearly with //n//. + + | [figs/brz/bmz_temporegressao.png] + | **Figure 6:** Time versus number of keys in //S// for the internal memory based algorithm. The solid line corresponds to a linear regression model. + +The observed fluctuation in the runtimes is as expected; recall that this +runtime has the form [figs/brz/img159.png] with //Z// a geometric random variable with +mean //1/p=e//. Thus, the runtime has mean [figs/brz/img181.png] and standard +deviation [figs/brz/img182.png]. +Therefore, the standard deviation also grows +linearly with //n//, as experimentally verified +in Table 1 and in Figure 6. 
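+The runtime model above can be illustrated with a toy simulation, our own code rather than anything in CMPH: `alpha` stands for the machine dependent constant, and `p` is the acceptance probability of a generated graph, with //1/p = e// in the setting described above.

```python
import random

def bmz_runtime_samples(n_keys, alpha, p, trials, rng):
    """Model the BMZ runtime as alpha * n * Z, where Z ~ Geometric(p)
    counts how many random graphs are tried until one is accepted."""
    samples = []
    for _ in range(trials):
        z = 1
        while rng.random() >= p:              # graph rejected: try again
            z += 1
        samples.append(alpha * n_keys * z)
    return samples
```

Averaging many samples recovers a mean close to //alpha * n * e//, while individual runs fluctuate widely, matching the behavior seen in Figure 6.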
+
+----------------------------------------
+
+===Performance of the External Memory Based Algorithm===
+
+The runtime of the external memory based algorithm is also a random variable,
+but now it follows a (highly concentrated) normal distribution, as we discuss at the end of this
+section. Again, we are interested in verifying the linearity claim made in
+[[2, Section 5.1 #papers]]. Therefore, we ran the algorithm for
+several numbers //n// of keys in //S//.
+
+The values chosen for //n// were 1, 2, 4, 8, 16, 32, 64, 128, 512 and 1000
+million.
+We limited the main memory to 500 megabytes for these experiments.
+The size [figs/brz/img8.png] of the a priori reserved internal memory area
+was set to 250 megabytes, the parameter //b// was set to //175// and
+the building block algorithm parameter //c// was again set to //1//.
+We show later on how [figs/brz/img8.png] affects the runtime of the algorithm. The other two parameters
+have insignificant influence on the runtime.
+
+We again use a statistical method for determining a suitable sample size
+to estimate the number of trials to be run for each value of //n//. We found that
+just one trial for each //n// would be enough with a confidence level of 95 %.
+However, we made 10 trials. This number of trials seems rather small, but, as
+shown below, the behavior of our algorithm is very stable and its runtime is
+almost deterministic (i.e., the standard deviation is very small).
+
+Table 2 presents the runtime average for each //n//,
+the respective standard deviations, and
+the respective confidence intervals given by
+the average time [figs/brz/img167.png] the distance from average time
+considering a confidence level of 95 %.
+Observing the runtime averages we noticed that
+the algorithm runs in expected linear time,
+as shown in [[2, Section 5.1 #papers]]. Better still,
+it is only approximately 60 % slower than the BMZ algorithm. 
+To get that value we used the linear regression model obtained for the runtime of
+the internal memory based algorithm to estimate how much time it would require
+for constructing a MPHF for a set of 1 billion keys.
+We got 2.3 hours for the internal memory based algorithm and we measured
+3.67 hours on average for the external memory based algorithm.
+Increasing the size of the internal memory area
+from 250 to 600 megabytes,
+we brought the time down to 3.09 hours. In this setup, the external memory based
+algorithm is just 34 % slower.
+
+%!include(html): ''TABLEBRZ2.t2t''
+ | **Table 2:** The external memory based algorithm: average time in seconds for constructing a MPHF, the standard deviation (SD), and the confidence intervals considering a confidence level of 95 %.
+
+Figure 7 presents the runtime for each trial. In addition,
+the solid line corresponds to a linear regression model
+obtained from the experimental measurements.
+As expected, the runtime for a given //n// has almost no
+variation.
+
+ | [figs/brz/brz_temporegressao.png]
+ | **Figure 7:** Time versus number of keys in //S// for our algorithm. The solid line corresponds to a linear regression model.
+
+An intriguing observation is that the runtime of the algorithm is almost
+deterministic, in spite of the fact that it uses as building block an
+algorithm with a considerable fluctuation in its runtime. A given bucket
+//i//, [figs/brz/img47.png], is a small set of keys (at most 256 keys) and,
+as argued in the last section, the runtime of the
+building block algorithm is a random variable [figs/brz/img207.png] with high fluctuation.
+However, the runtime //Y// of the searching step of the external memory based algorithm is given
+by [figs/brz/img209.png]. 
Under the hypothesis that
+the [figs/brz/img207.png] are independent and bounded, the //law of large numbers// (see,
+e.g., [[6 #papers]]) implies that the random variable [figs/brz/img210.png] converges
+to a constant as [figs/brz/img83.png]. This explains why the runtime of our
+algorithm is almost deterministic.
+
+----------------------------------------
+
+=== Controlling disk accesses ===
+
+In order to bring down the number of seek operations on disk
+we benefit from the fact that our algorithm leaves almost all main
+memory available to be used as disk I/O buffer.
+In this section we evaluate how much the parameter [figs/brz/img8.png] affects the runtime of our algorithm.
+For that we fixed //n// at 1 billion URLs,
+set the main memory of the machine used for the experiments
+to 1 gigabyte and used [figs/brz/img8.png] equal to 100, 200, 300, 400, 500 and 600
+megabytes.
+
+Table 3 presents the number of files //N//,
+the buffer size used for all files, the number of seeks in the worst case considering
+the pessimistic assumption mentioned in [[2, Section 5.1 #papers]], and
+the time to generate a MPHF for 1 billion keys as a function of the amount of internal
+memory available. Observing Table 3 we noticed that the time spent in the construction
+decreases as the value of [figs/brz/img8.png] increases. However, for [figs/brz/img213.png], the variation
+in the time is not as significant as for [figs/brz/img214.png].
+This can be explained by the fact that the kernel 2.6 I/O scheduler of Linux
+has smart policies for avoiding seeks and diminishing the average seek time
+(see [http://www.linuxjournal.com/article/6931 http://www.linuxjournal.com/article/6931]).
+
+%!include(html): ''TABLEBRZ3.t2t''
+ | **Table 3:** Influence of the internal memory area size ([figs/brz/img8.png]) in the external memory based algorithm runtime.
+
+
+----------------------------------------
+
+==Papers==[papers]
+
++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], D. 
Menoti, [N. Ziviani http://www.dcc.ufmg.br/~nivio]. [A New algorithm for constructing minimal perfect hash functions papers/bmz_tr004_04.ps], Technical Report TR004/04, Department of Computer Science, Federal University of Minas Gerais, 2004. + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], Y. Kohayakawa, [N. Ziviani http://www.dcc.ufmg.br/~nivio]. [An Approach for Minimal Perfect Hash Functions for Very Large Databases papers/tr06.pdf], Technical Report TR003/06, Department of Computer Science, Federal University of Minas Gerais, 2004. + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], Y. Kohayakawa, and [N. Ziviani http://www.dcc.ufmg.br/~nivio]. [A Practical Minimal Perfect Hashing Method papers/wea05.pdf]. //4th International Workshop on efficient and Experimental Algorithms (WEA05),// Springer-Verlag Lecture Notes in Computer Science, vol. 3505, Santorini Island, Greece, May 2005, 488-500. + ++ [M. Seltzer. Beyond relational databases. ACM Queue, 3(3), April 2005. http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=299] + ++ [Bob Jenkins. Algorithm alley: Hash functions. Dr. Dobb's Journal of Software Tools, 22(9), september 1997. http://burtleburtle.net/bob/hash/doobs.html] + ++ R. Jain. The art of computer systems performance analysis: techniques for experimental design, measurement, simulation, and modeling. John Wiley, first edition, 1991. + + +%!include: ALGORITHMS.t2t + +%!include: FOOTER.t2t + +%!include(html): ''GOOGLEANALYTICS.t2t'' \ No newline at end of file diff --git a/CHD.t2t b/CHD.t2t new file mode 100644 index 0000000..f17a142 --- /dev/null +++ b/CHD.t2t @@ -0,0 +1,44 @@ +Compress, Hash and Displace: CHD Algorithm + + +%!includeconf: CONFIG.t2t + +---------------------------------------- +==Introduction== + +The important performance parameters of a PHF are representation size, evaluation time and construction time. 
The representation size plays an important role when the whole function fits in a faster memory and the actual data is stored in a slower memory. For instance, compact PHFs can fit entirely in a CPU cache, which makes their evaluation very fast by avoiding cache misses. The CHD algorithm plays an important role in this context. It was designed by Djamal Belazzougui, Fabiano C. Botelho, and Martin Dietzfelbinger in [[2 #papers]].
+
+
+The CHD algorithm makes it possible to obtain PHFs with representation size very close to optimal while retaining //O(n)// construction time and //O(1)// evaluation time. For example, in the case //m=2n// we obtain a PHF that uses space //0.67// bits per key, and for //m=1.23n// we obtain space //1.4// bits per key, which was not achievable with previously known methods. The CHD algorithm is inspired by several known algorithms; the main new feature is that it combines a modification of Pagh's ``hash-and-displace'' approach with data compression on a sequence of hash function indices. That combination makes it possible to significantly reduce space usage while retaining linear construction time and constant query time. The CHD algorithm can also be used for //k//-perfect hashing, where at most //k// keys may be mapped to the same value. For the analysis we assume that fully random hash functions are given for free; such assumptions can be justified and were made in previous papers.
+
+The compact PHFs generated by the CHD algorithm can be used in many applications in which we want to assign a unique identifier to each key without storing any information on the key. One of the most obvious applications of those functions (or //k//-perfect hash functions) is when we have a small fast memory in which we can store the perfect hash function while the keys and associated satellite data are stored in slower but larger memory. The size of a block or a transfer unit may be chosen so that //k// data items can be retrieved in one read access. 
In this case we can ensure that data associated with a key can be retrieved in a single probe to slower memory. This has been used for example in hardware routers [[4 #papers]]. + + +The CHD algorithm generates the most compact PHFs and MPHFs we know of in //O(n)// time. The time required to evaluate the generated functions is constant (in practice less than //1.4// microseconds). The storage space of the resulting PHFs and MPHFs is distant from the information-theoretic lower bound by a factor of //1.43//. The closest competitor is the algorithm by Dietzfelbinger and Pagh [[3 #papers]], but their algorithm does not run in linear time. Furthermore, the CHD algorithm can be tuned to run faster than the BPZ algorithm [[1 #papers]] (the fastest algorithm available in the literature so far) and to obtain more compact functions. Its most impressive characteristic is that, in principle, it can approach the information-theoretic lower bound while remaining practical. A detailed description of the CHD algorithm can be found in [[2 #papers]]. + + + +---------------------------------------- + +==Experimental Results== + +Experimental results comparing the CHD algorithm with [the BDZ algorithm bdz.html] +and others available in the CMPH library are presented in [[2 #papers]]. +---------------------------------------- + +==Papers==[papers] + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], [R. Pagh http://www.itu.dk/~pagh/], [N. Ziviani http://www.dcc.ufmg.br/~nivio]. [Simple and space-efficient minimal perfect hash functions papers/wads07.pdf]. //In Proceedings of the 10th International Workshop on Algorithms and Data Structures (WADs'07),// Springer-Verlag Lecture Notes in Computer Science, vol. 4619, Halifax, Canada, August 2007, 139-150. + ++ [F. C. Botelho http://www.dcc.ufmg.br/~fbotelho], D. Belazzougui and M. Dietzfelbinger. [Compress, hash and displace papers/esa09.pdf]. //In Proceedings of the 17th European Symposium on Algorithms (ESA’09)//.
Springer LNCS, 2009. + ++ M. Dietzfelbinger and [R. Pagh http://www.itu.dk/~pagh/]. Succinct data structures for retrieval and approximate membership. //In Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP’08)//, pages 385–396, Berlin, Heidelberg, 2008. Springer-Verlag. + ++ B. Prabhakar and F. Bonomi. Perfect hashing for network applications. //In Proceedings of the IEEE International Symposium on Information Theory//. IEEE Press, 2006. + + +%!include: ALGORITHMS.t2t + +%!include: FOOTER.t2t + +%!include(html): ''GOOGLEANALYTICS.t2t'' \ No newline at end of file diff --git a/CHM.t2t b/CHM.t2t new file mode 100644 index 0000000..adf9b30 --- /dev/null +++ b/CHM.t2t @@ -0,0 +1,88 @@ +CHM Algorithm + + +%!includeconf: CONFIG.t2t + +---------------------------------------- + +==The Algorithm== +The algorithm is presented in [[1,2,3 #papers]]. +---------------------------------------- + +==Memory Consumption== + +Now we detail the memory consumption to generate and to store minimal perfect hash functions +using the CHM algorithm. The structures responsible for memory consumption are the +following: +- Graph: + + **first**: a vector that stores //cn// integer numbers, each one representing + the first edge (index in the vector edges) in the list of + edges of each vertex. + The integer numbers are 4 bytes long. Therefore, + the vector first is stored in //4cn// bytes. + + + **edges**: a vector that represents the edges of the graph. As each edge + is composed of a pair of vertices, each entry stores two integer numbers + of 4 bytes that represent the vertices. As there are //n// edges, the + vector edges is stored in //8n// bytes. + + + **next**: given a vertex [figs/img139.png], we can discover the edges that + contain [figs/img139.png] by following its list of edges, which starts at + first[[figs/img139.png]] and whose next + edges are given by next[...first[[figs/img139.png]]...].
Therefore, + the vectors first and next represent + the linked lists of edges of each vertex. As there are two vertices for each edge, + when an edge is inserted in the graph, it must be inserted in the linked lists + of both of its vertices. Therefore, there are //2n// entries of integer + numbers in the vector next, so it is stored in //4*2n = 8n// bytes. + +- Other auxiliary structures + + **visited**: a vector of //cn// bits, where each bit indicates if the g value of + a given vertex was already defined. Therefore, the vector visited is stored + in //cn/8// bytes. + + + **function //g//**: represented by a vector of //cn// integer numbers. + As each integer number is 4 bytes long, the function //g// is stored in + //4cn// bytes. + +Thus, the total memory consumption of the CHM algorithm for generating a minimal +perfect hash function (MPHF) is: //(8.125c + 16)n + O(1)// bytes. +As the value of the constant //c// must be at least 2.09 we have: + || //c// | Memory consumption to generate a MPHF | + | 2.09 | //33.00n + O(1)// | + + | **Table 1:** Memory consumption to generate a MPHF using the CHM algorithm. + +Now we present the memory consumption to store the resulting function. +We only need to store the //g// function. Thus, we need //4cn// bytes. +Again we have: + || //c// | Memory consumption to store a MPHF | + | 2.09 | //8.36n// | + + | **Table 2:** Memory consumption to store a MPHF generated by the CHM algorithm. + +---------------------------------------- + +==Experimental Results== + +[CHM x BMZ comparison.html] + +---------------------------------------- + +==Papers==[papers] + ++ Z.J. Czech, G. Havas, and B.S. Majewski. [An optimal algorithm for generating minimal perfect hash functions. papers/chm92.pdf], Information Processing Letters, 43(5):257-264, 1992. + ++ Z.J. Czech, G. Havas, and B.S. Majewski. Fundamental study: perfect hashing. + Theoretical Computer Science, 182:1-143, 1997. + ++ B.S. Majewski, N.C. Wormald, G.
Havas, and Z.J. Czech. A family of perfect hashing methods. + The Computer Journal, 39(6):547--554, 1996. + + +%!include: ALGORITHMS.t2t + +%!include: FOOTER.t2t + +%!include(html): ''GOOGLEANALYTICS.t2t'' \ No newline at end of file diff --git a/COMPARISON.t2t b/COMPARISON.t2t new file mode 100644 index 0000000..d5aba53 --- /dev/null +++ b/COMPARISON.t2t @@ -0,0 +1,111 @@ +Comparison Between the BMZ and CHM Algorithms + + +%!includeconf: CONFIG.t2t + +---------------------------------------- + +==Characteristics== +Table 1 presents the main characteristics of the two algorithms. +The number of edges in the graph [figs/img27.png] is [figs/img236.png], +the number of keys in the input set [figs/img20.png]. +The number of vertices of [figs/img32.png] is equal +to [figs/img12.png] and [figs/img237.png] for the BMZ and CHM algorithms, respectively. +This measure is related to the amount of space needed to store the array [figs/img37.png]. +This reduces the space required to store the function generated by the BMZ algorithm to [figs/img238.png] of the space required by the CHM algorithm. +The number of critical edges is [figs/img76.png] and 0 for the BMZ and CHM algorithms, +respectively. +The BMZ algorithm generates random graphs that necessarily contain cycles, while the +CHM algorithm +generates +acyclic random graphs. +Finally, the CHM algorithm generates [order preserving functions concepts.html] +while the BMZ algorithm does not preserve order. + +%!include(html): ''TABLE1.t2t'' + | **Table 1:** Main characteristics of the algorithms. + +---------------------------------------- + +==Memory Consumption== + +- Memory consumption to generate the minimal perfect hash function (MPHF): + || Algorithm | //c// | Memory consumption to generate a MPHF | + | BMZ | 0.93 | //24.80n + O(1)// | + | BMZ | 1.15 | //26.42n + O(1)// | + | CHM | 2.09 | //33.00n + O(1)// | + + | **Table 2:** Memory consumption to generate a MPHF using the algorithms BMZ and CHM.
+ +- Memory consumption to store the resulting minimal perfect hash function (MPHF): + || Algorithm | //c// | Memory consumption to store a MPHF | + | BMZ | 0.93 | //3.72n// | + | BMZ | 1.15 | //4.60n// | + | CHM | 2.09 | //8.36n// | + + | **Table 3:** Memory consumption to store a MPHF generated by the algorithms BMZ and CHM. + +---------------------------------------- + +==Run times== +We now present some experimental results comparing the BMZ and CHM algorithms. +The data consists of a collection of 100 million uniform resource locators +(URLs) collected from the Web. +The average length of a URL in the collection is 63 bytes. +All experiments were carried out on +a computer running the Linux operating system, version 2.6.7, +with a 2.4 gigahertz processor and +4 gigabytes of main memory. + +Table 4 presents time measurements. +All times are in seconds. +The table entries represent averages over 50 trials. +The column labelled [figs/img243.png] represents +the number of iterations needed to generate the random graph [figs/img32.png] in the +mapping step of the algorithms. +The next columns represent the run times +for the mapping plus ordering steps together and for the searching +step of each algorithm. +The last column represents the percentage gain of our algorithm +over the CHM algorithm. + +%!include(html): ''TABLE4.t2t'' + | **Table 4:** Time measurements for BMZ and the CHM algorithm. + +The mapping step of the BMZ algorithm is faster because +the expected number of iterations in the mapping step to generate [figs/img32.png] is +2.13 for the BMZ algorithm and 2.92 for the CHM algorithm +(see [[2 bmz.html#papers]] for details). +The graph [figs/img32.png] generated by the BMZ algorithm +has [figs/img12.png] vertices, against [figs/img237.png] for the CHM algorithm. +These two facts make the BMZ algorithm faster in the mapping step. +The time for the ordering step of the BMZ algorithm is approximately equal to +the time the CHM algorithm takes to check whether [figs/img32.png] is acyclic.
+ +The searching step of the CHM algorithm is faster, but the total +time of the BMZ algorithm is, on average, approximately 59% faster +than that of the CHM algorithm. +It is worth noting the times for the searching step: +for both algorithms they are not the dominant times, +and the experimental results clearly show +linear behavior for the searching step. + +We now present run times for the BMZ algorithm using a [heuristic bmz.html#heuristic] that +reduces the space requirement +to any given value between [figs/img12.png] words and [figs/img13.png] words. +For example, for [figs/img244.png] and [figs/img6.png], the analytical expected numbers +of iterations are [figs/img245.png] and [figs/img246.png], respectively +(for [figs/img247.png], the numbers of iterations are 2.78 for [figs/img244.png] and 3.04 +for [figs/img6.png]). +Table 5 presents the total times to construct a +function for [figs/img247.png], with an increase from [figs/img248.png] seconds +for [figs/img128.png] (see Table 4) to [figs/img249.png] seconds for [figs/img244.png] and +to [figs/img250.png] seconds for [figs/img6.png]. + +%!include(html): ''TABLE5.t2t'' + | **Table 5:** Time measurements for the BMZ tuned algorithm with [figs/img5.png] and [figs/img6.png]. + +%!include: ALGORITHMS.t2t + +%!include: FOOTER.t2t + +%!include(html): ''GOOGLEANALYTICS.t2t'' \ No newline at end of file diff --git a/CONCEPTS.t2t b/CONCEPTS.t2t new file mode 100644 index 0000000..b8cb2c9 --- /dev/null +++ b/CONCEPTS.t2t @@ -0,0 +1,56 @@ +Minimal Perfect Hash Functions - Introduction + + +%!includeconf: CONFIG.t2t + +---------------------------------------- +==Basic Concepts== + +Suppose [figs/img14.png] is a universe of //keys//. +Let [figs/img15.png] be a //hash function// that maps the keys from [figs/img14.png] to a given interval of integers [figs/img16.png]. +Let [figs/img17.png] be a set of [figs/img8.png] keys from [figs/img14.png].
+Given a key [figs/img18.png], the hash function [figs/img7.png] computes an +integer in [figs/img19.png] for the storage or retrieval of [figs/img11.png] in +a //hash table//. +Hashing methods for //non-static sets// of keys can be used to construct +data structures storing [figs/img20.png] and supporting membership queries +"[figs/img18.png]?" in expected time [figs/img21.png]. +However, they involve a certain amount of wasted space owing to unused +locations in the table and wasted time to resolve collisions when +two keys are hashed to the same table location. + +For //static sets// of keys it is possible to compute a function +to find any key in a table in one probe; such hash functions are called +//perfect//. +More precisely, given a set of keys [figs/img20.png], we shall say that a +hash function [figs/img15.png] is a //perfect hash function// +for [figs/img20.png] if [figs/img7.png] is an injection on [figs/img20.png], +that is, there are no //collisions// among the keys in [figs/img20.png]: +if [figs/img11.png] and [figs/img22.png] are in [figs/img20.png] and [figs/img23.png], +then [figs/img24.png]. +Figure 1(a) illustrates a perfect hash function. +Since no collisions occur, each key can be retrieved from the table +with a single probe. +If [figs/img25.png], that is, the table has the same size as [figs/img20.png], +then we say that [figs/img7.png] is a //minimal perfect hash function// +for [figs/img20.png]. +Figure 1(b) illustrates a minimal perfect hash function. +Minimal perfect hash functions totally avoid the problem of wasted +space and time. A perfect hash function [figs/img7.png] is //order preserving// +if the keys in [figs/img20.png] are arranged in some given order +and [figs/img7.png] preserves this order in the hash table. + + | [figs/img26.png] + | **Figure 1:** (a) Perfect hash function. (b) Minimal perfect hash function.
+ +Minimal perfect hash functions are widely used for memory-efficient +storage and fast retrieval of items from static sets, such as words in natural +languages, reserved words in programming languages or interactive systems, +uniform resource locators (URLs) in Web search engines, or item sets in +data mining techniques. + +%!include: ALGORITHMS.t2t + +%!include: FOOTER.t2t + +%!include(html): ''GOOGLEANALYTICS.t2t'' \ No newline at end of file diff --git a/CONFIG.t2t b/CONFIG.t2t new file mode 100644 index 0000000..d3eb24f --- /dev/null +++ b/CONFIG.t2t @@ -0,0 +1,51 @@ +%! style(html): DOC.css +%! PreProc(html): '^%html% ' '' +%! PreProc(txt): '^%txt% ' '' +%! PostProc(html): "&" "&" +%! PostProc(txt): " " " " +%! PostProc(html): 'ALIGN="middle" SRC="figs/img7.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img7.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img57.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img57.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img32.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img32.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img20.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img20.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img60.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img60.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img62.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img62.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img79.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img79.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img139.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img139.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img140.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img140.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img143.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img143.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img115.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img115.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img11.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img11.png"\1>' +%!
PostProc(html): 'ALIGN="middle" SRC="figs/img169.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img169.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img96.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img96.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img178.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img178.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img180.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img180.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img183.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img183.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img189.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img189.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img196.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img196.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img172.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img172.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img8.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img8.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img1.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img1.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img14.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img14.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img128.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img128.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img112.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img112.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img12.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img12.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img13.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img13.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img244.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img244.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img245.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img245.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img246.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img246.png"\1>' +%! 
PostProc(html): 'ALIGN="middle" SRC="figs/img15.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img15.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img25.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img25.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img168.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img168.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img6.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img6.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img5.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img5.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img28.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img28.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img237.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img237.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img248.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img248.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img249.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img249.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/img250.png"(.*?)>' 'ALIGN="bottom" SRC="figs/img250.png"\1>' +%! PostProc(html): 'ALIGN="middle" SRC="figs/bdz/img8.png"(.*?)>' 'ALIGN="bottom" SRC="figs/bdz/img8.png"\1>' +% The ^ needs to be escaped by \ +%!postproc(html): \^\^(.*?)\^\^ \1 +%!postproc(html): ,,(.*?),, \1 + diff --git a/COPYING b/COPYING new file mode 100644 index 0000000..e114d20 --- /dev/null +++ b/COPYING @@ -0,0 +1,5 @@ +The code of the cmph library is dual licensed under the LGPL version 2 and MPL +1.1 licenses. Please refer to the LGPL-2 and MPL-1.1 files in the repository +for the full description of each of the licenses. + +For cxxmph, the files stringpiece.h and MurmurHash2 are covered by the BSD and MIT licenses, respectively.
diff --git a/ChangeLog b/ChangeLog new file mode 100644 index 0000000..d846708 --- /dev/null +++ b/ChangeLog @@ -0,0 +1,453 @@ +2005-08-08 18:34 fc_botelho + + * INSTALL, examples/Makefile, examples/Makefile.in, + examples/.deps/file_adapter_ex2.Po, + examples/.deps/vector_adapter_ex1.Po, src/brz.c: [no log message] + +2005-08-07 22:00 fc_botelho + + * src/: brz.c, brz.h, brz_structs.h, cmph.c, cmph.h, main.c: + temporary directory passed by command line + +2005-08-07 20:22 fc_botelho + + * src/brz.c: stable version of BRZ + +2005-08-06 22:09 fc_botelho + + * src/bmz.c: no message + +2005-08-06 22:02 fc_botelho + + * src/bmz.c: no message + +2005-08-06 21:45 fc_botelho + + * src/brz.c: fastest version of BRZ + +2005-08-06 17:20 fc_botelho + + * src/: bmz.c, brz.c, main.c: [no log message] + +2005-07-29 16:43 fc_botelho + + * src/brz.c: BRZ algorithm is almost stable + +2005-07-29 15:29 fc_botelho + + * src/: bmz.c, brz.c, brz_structs.h, cmph_types.h: BRZ algorithm is + almost stable + +2005-07-29 00:09 fc_botelho + + * src/: brz.c, djb2_hash.c, djb2_hash.h, fnv_hash.c, fnv_hash.h, + hash.c, hash.h, jenkins_hash.c, jenkins_hash.h, sdbm_hash.c, + sdbm_hash.h: it was fixed more mistakes in BRZ algorithm + +2005-07-28 21:00 fc_botelho + + * src/: bmz.c, brz.c, cmph.c: fixed some mistakes in BRZ algorithm + +2005-07-27 19:13 fc_botelho + + * src/brz.c: algorithm BRZ included + +2005-07-27 18:16 fc_botelho + + * src/: bmz_structs.h, brz.c, brz.h, brz_structs.h: Algorithm BRZ + included + +2005-07-27 18:13 fc_botelho + + * src/: Makefile.am, bmz.c, chm.c, cmph.c, cmph.h, cmph_types.h: + Algorithm BRZ included + +2005-07-25 19:18 fc_botelho + + * README, README.t2t, scpscript: it was included an examples + directory + +2005-07-25 18:26 fc_botelho + + * INSTALL, Makefile.am, configure.ac, examples/Makefile, + examples/Makefile.am, examples/Makefile.in, + examples/file_adapter_ex2.c, examples/keys.txt, + examples/vector_adapter_ex1.c, examples/.deps/file_adapter_ex2.Po, + 
examples/.deps/vector_adapter_ex1.Po, src/cmph.c, src/cmph.h: it + was included a examples directory + +2005-03-03 02:07 davi + + * src/: bmz.c, chm.c, chm.h, chm_structs.h, cmph.c, cmph.h, + graph.c, graph.h, jenkins_hash.c, jenkins_hash.h, main.c (xgraph): + New f*cking cool algorithm works. Roughly implemented in chm.c + +2005-03-02 20:55 davi + + * src/xgraph.c (xgraph): xchmr working nice, but a bit slow + +2005-03-02 02:01 davi + + * src/xchmr.h: file xchmr.h was initially added on branch xgraph. + +2005-03-02 02:01 davi + + * src/xchmr_structs.h: file xchmr_structs.h was initially added on + branch xgraph. + +2005-03-02 02:01 davi + + * src/xchmr.c: file xchmr.c was initially added on branch xgraph. + +2005-03-02 02:01 davi + + * src/: Makefile.am, cmph.c, cmph_types.h, xchmr.c, xchmr.h, + xchmr_structs.h, xgraph.c, xgraph.h (xgraph): xchmr working fine + except for false positives on cyclic detection. + +2005-03-02 00:05 davi + + * src/: Makefile.am, xgraph.c, xgraph.h (xgraph): Added external + graph functionality in branch xgraph. + +2005-03-02 00:05 davi + + * src/xgraph.c: file xgraph.c was initially added on branch xgraph. + +2005-03-02 00:05 davi + + * src/xgraph.h: file xgraph.h was initially added on branch xgraph. + +2005-02-28 19:53 davi + + * src/chm.c: Fixed off by one bug in chm. 
+ +2005-02-17 16:20 fc_botelho + + * LOGO.html, README, README.t2t, gendocs: The way of calling the + function cmph_search was fixed in the file README.t2t + +2005-01-31 17:13 fc_botelho + + * README.t2t: Heuristic BMZ memory consumption was updated + +2005-01-31 17:09 fc_botelho + + * BMZ.t2t: DJB2, SDBM, FNV and Jenkins hash link were added + +2005-01-31 16:50 fc_botelho + + * BMZ.t2t, CHM.t2t, COMPARISON.t2t, CONCEPTS.t2t, CONFIG.t2t, + FAQ.t2t, GPERF.t2t, LOGO.t2t, README.t2t, TABLE1.t2t, TABLE4.t2t, + TABLE5.t2t, DOC.css: BMZ documentation was finished + +2005-01-28 18:12 fc_botelho + + * figs/img1.png, figs/img10.png, figs/img100.png, figs/img101.png, + figs/img102.png, figs/img103.png, figs/img104.png, figs/img105.png, + figs/img106.png, figs/img107.png, figs/img108.png, figs/img109.png, + papers/bmz_tr004_04.ps, papers/bmz_wea2005.ps, papers/chm92.pdf, + figs/img11.png, figs/img110.png, figs/img111.png, figs/img112.png, + figs/img113.png, figs/img114.png, figs/img115.png, figs/img116.png, + figs/img117.png, figs/img118.png, figs/img119.png, figs/img12.png, + figs/img120.png, figs/img121.png, figs/img122.png, figs/img123.png, + figs/img124.png, figs/img125.png, figs/img126.png, figs/img127.png, + figs/img128.png, figs/img129.png, figs/img13.png, figs/img130.png, + figs/img131.png, figs/img132.png, figs/img133.png, figs/img134.png, + figs/img135.png, figs/img136.png, figs/img137.png, figs/img138.png, + figs/img139.png, figs/img14.png, figs/img140.png, figs/img141.png, + figs/img142.png, figs/img143.png, figs/img144.png, figs/img145.png, + figs/img146.png, figs/img147.png, figs/img148.png, figs/img149.png, + figs/img15.png, figs/img150.png, figs/img151.png, figs/img152.png, + figs/img153.png, figs/img154.png, figs/img155.png, figs/img156.png, + figs/img157.png, figs/img158.png, figs/img159.png, figs/img16.png, + figs/img160.png, figs/img161.png, figs/img162.png, figs/img163.png, + figs/img164.png, figs/img165.png, figs/img166.png, figs/img167.png, + 
figs/img168.png, figs/img169.png, figs/img17.png, figs/img170.png, + figs/img171.png, figs/img172.png, figs/img173.png, figs/img174.png, + figs/img175.png, figs/img176.png, figs/img177.png, figs/img178.png, + figs/img179.png, figs/img18.png, figs/img180.png, figs/img181.png, + figs/img182.png, figs/img183.png, figs/img184.png, figs/img185.png, + figs/img186.png, figs/img187.png, figs/img188.png, figs/img189.png, + figs/img19.png, figs/img190.png, figs/img191.png, figs/img192.png, + figs/img193.png, figs/img194.png, figs/img195.png, figs/img196.png, + figs/img197.png, figs/img198.png, figs/img199.png, figs/img2.png, + figs/img20.png, figs/img200.png, figs/img201.png, figs/img202.png, + figs/img203.png, figs/img204.png, figs/img205.png, figs/img206.png, + figs/img207.png, figs/img208.png, figs/img209.png, figs/img21.png, + figs/img210.png, figs/img211.png, figs/img212.png, figs/img213.png, + figs/img214.png, figs/img215.png, figs/img216.png, figs/img217.png, + figs/img218.png, figs/img219.png, figs/img22.png, figs/img220.png, + figs/img221.png, figs/img222.png, figs/img223.png, figs/img224.png, + figs/img225.png, figs/img226.png, figs/img227.png, figs/img228.png, + figs/img229.png, figs/img23.png, figs/img230.png, figs/img231.png, + figs/img232.png, figs/img233.png, figs/img234.png, figs/img235.png, + figs/img236.png, figs/img237.png, figs/img238.png, figs/img239.png, + figs/img24.png, figs/img240.png, figs/img241.png, figs/img242.png, + figs/img243.png, figs/img244.png, figs/img245.png, figs/img246.png, + figs/img247.png, figs/img248.png, figs/img249.png, figs/img25.png, + figs/img250.png, figs/img251.png, figs/img252.png, figs/img253.png, + figs/img26.png, figs/img27.png, figs/img28.png, figs/img29.png, + figs/img3.png, figs/img30.png, figs/img31.png, figs/img32.png, + figs/img33.png, figs/img34.png, figs/img35.png, figs/img36.png, + figs/img37.png, figs/img38.png, figs/img39.png, figs/img4.png, + figs/img40.png, figs/img41.png, figs/img42.png, figs/img43.png, + 
figs/img44.png, figs/img45.png, figs/img46.png, figs/img47.png, + figs/img48.png, figs/img49.png, figs/img5.png, figs/img50.png, + figs/img51.png, figs/img52.png, figs/img53.png, figs/img54.png, + figs/img55.png, figs/img56.png, figs/img57.png, figs/img58.png, + figs/img59.png, figs/img6.png, figs/img60.png, figs/img61.png, + figs/img62.png, figs/img63.png, figs/img64.png, figs/img65.png, + figs/img66.png, figs/img67.png, figs/img68.png, figs/img69.png, + figs/img7.png, figs/img70.png, figs/img71.png, figs/img72.png, + figs/img73.png, figs/img74.png, figs/img75.png, figs/img76.png, + figs/img77.png, figs/img78.png, figs/img79.png, figs/img8.png, + figs/img80.png, figs/img81.png, figs/img82.png, figs/img83.png, + figs/img84.png, figs/img85.png, figs/img86.png, figs/img87.png, + figs/img88.png, figs/img89.png, figs/img9.png, figs/img90.png, + figs/img91.png, figs/img92.png, figs/img93.png, figs/img94.png, + figs/img95.png, figs/img96.png, figs/img97.png, figs/img98.png, + figs/img99.png: Initial version + +2005-01-28 18:07 fc_botelho + + * BMZ.t2t, CHM.t2t, COMPARISON.t2t, CONFIG.t2t, README.t2t: It was + improved the documentation of BMZ and CHM algorithms + +2005-01-27 18:07 fc_botelho + + * BMZ.t2t, CHM.t2t, FAQ.t2t: history of BMZ algorithm is available + +2005-01-27 14:23 fc_botelho + + * AUTHORS: It was added the authors' email + +2005-01-27 14:21 fc_botelho + + * BMZ.t2t, CHM.t2t, COMPARISON.t2t, FAQ.t2t, FOOTER.t2t, GPERF.t2t, + README.t2t: It was added FOOTER.t2t file + +2005-01-27 12:16 fc_botelho + + * src/cmph_types.h: It was removed pjw and glib functions from + cmph_hash_names vector + +2005-01-27 12:12 fc_botelho + + * src/hash.c: It was removed pjw and glib functions from + cmph_hash_names vector + +2005-01-27 11:01 davi + + * FAQ.t2t, README, README.t2t, gendocs, src/bmz.c, src/bmz.h, + src/chm.c, src/chm.h, src/cmph.c, src/cmph_structs.c, src/debug.h, + src/main.c: Fix to alternate hash functions code. Removed htonl + stuff from chm algorithm. 
Added faq. + +2005-01-27 09:14 fc_botelho + + * README.t2t: It was corrected some formatting mistakes + +2005-01-26 22:04 davi + + * BMZ.t2t, CHM.t2t, COMPARISON.t2t, GPERF.t2t, README, README.t2t, + gendocs: Added gperf notes. + +2005-01-25 19:10 fc_botelho + + * INSTALL: generated in version 0.3 + +2005-01-25 19:09 fc_botelho + + * src/: czech.c, czech.h, czech_structs.h: The czech.h, + czech_structs.h and czech.c files were removed + +2005-01-25 19:06 fc_botelho + + * src/: chm.c, chm.h, chm_structs.h, cmph.c, cmph_types.h, main.c, + Makefile.am: It was changed the prefix czech by chm + +2005-01-25 18:50 fc_botelho + + * gendocs: script to generate the documentation and the README file + +2005-01-25 18:47 fc_botelho + + * README: README was updated + +2005-01-25 18:44 fc_botelho + + * configure.ac: Version was updated + +2005-01-25 18:42 fc_botelho + + * src/cmph.h: Vector adapter commented + +2005-01-25 18:40 fc_botelho + + * CHM.t2t, CONFIG.t2t, LOGO.html: It was included the PreProc macro + through the CONFIG.t2t file and the LOGO through the LOGO.html file + +2005-01-25 18:33 fc_botelho + + * README.t2t, BMZ.t2t, COMPARISON.t2t, CZECH.t2t: It was included + the PreProc macro through the CONFIG.t2t file and the LOGO through + the LOGO.html file + +2005-01-24 18:25 fc_botelho + + * src/: bmz.c, bmz.h, cmph_structs.c, cmph_structs.h, czech.c, + cmph.c, czech.h, main.c, cmph.h: The file adpater was implemented. + +2005-01-24 17:20 fc_botelho + + * README.t2t: the memory consumption to create a mphf using bmz + with a heuristic was fixed. + +2005-01-24 17:11 fc_botelho + + * src/: cmph_types.h, main.c: The algorithms and hash functions + were put in alphabetical order + +2005-01-24 16:15 fc_botelho + + * BMZ.t2t, COMPARISON.t2t, CZECH.t2t, README.t2t: It was fixed some + English mistakes and It was included the files BMZ.t2t, CZECH.t2t + and COMPARISON.t2t + +2005-01-21 19:19 davi + + * ChangeLog, Doxyfile: Added Doxyfile. 
+ +2005-01-21 19:14 davi + + * README.t2t, wingetopt.c, src/cmph.h, tests/graph_tests.c: Fixed + wingetopt.c + +2005-01-21 18:44 fc_botelho + + * src/Makefile.am: included files bitbool.h and bitbool.c + +2005-01-21 18:42 fc_botelho + + * src/: bmz.c, bmz.h, bmz_structs.h, cmph.c, cmph.h, + cmph_structs.c, cmph_structs.h, czech.c, czech.h, czech_structs.h, + djb2_hash.c, djb2_hash.h, fnv_hash.c, fnv_hash.h, graph.c, graph.h, + hash.c, hash.h, hash_state.h, jenkins_hash.c, jenkins_hash.h, + main.c, sdbm_hash.c, sdbm_hash.h, vqueue.c, vqueue.h, vstack.c, + vstack.h: Only public symbols were prefixed with cmph, and the API + was changed to agree with the initial txt2html documentation + +2005-01-21 18:30 fc_botelho + + * src/: bitbool.c, bitbool.h: mask to represent a boolean value + using only 1 bit + +2005-01-20 10:28 davi + + * ChangeLog, README, README.t2t, wingetopt.h, src/main.c: Added + initial txt2tags documentation. + +2005-01-19 10:40 davi + + * acinclude.m4, configure.ac: Added macros for large file support. + +2005-01-18 19:06 fc_botelho + + * src/: bmz.c, bmz.h, bmz_structs.h, cmph.c, cmph.h, + cmph_structs.c, cmph_structs.h, cmph_types.h, czech.c, czech.h, + czech_structs.h, djb2_hash.c, djb2_hash.h, fnv_hash.c, fnv_hash.h, + graph.c, graph.h, hash.c, hash.h, hash_state.h, jenkins_hash.c, + jenkins_hash.h, main.c, sdbm_hash.c, sdbm_hash.h, vqueue.c, + vqueue.h, vstack.c, vstack.h: version with cmph prefix + +2005-01-18 15:10 davi + + * ChangeLog, cmph.vcproj, cmphapp.vcproj, wingetopt.c, wingetopt.h: + Added missing files. 
+ +2005-01-18 14:25 fc_botelho + + * aclocal.m4: initial version + +2005-01-18 14:16 fc_botelho + + * aclocal.m4: initial version + +2005-01-18 13:58 fc_botelho + + * src/czech.c: using bit mask to represent boolean values + +2005-01-18 13:56 fc_botelho + + * src/czech.c: no message + +2005-01-18 10:18 davi + + * COPYING, INSTALL, src/Makefile.am, src/bmz.c, src/bmz.h, + src/cmph.c, src/cmph.h, src/cmph_structs.c, src/cmph_structs.h, + src/czech.c, src/czech.h, src/debug.h, src/djb2_hash.c, + src/graph.c, src/graph.h, src/hash.c, src/jenkins_hash.c, + src/main.c, src/sdbm_hash.c, src/vqueue.c: Fixed a lot of warnings. + Added Visual Studio project. Made needed changes to work with + Windows. + +2005-01-17 16:01 fc_botelho + + * src/main.c: stable version + +2005-01-17 15:58 fc_botelho + + * src/: bmz.c, cmph.c, cmph.h, graph.c: stable version + +2005-01-13 21:56 davi + + * src/czech.c: Better error handling in czech.c. + +2005-01-05 18:45 fc_botelho + + * src/cmph_structs.c: included option -k to specify the number of + keys to use + +2005-01-05 17:48 fc_botelho + + * src/: cmph.c, main.c: included option -k to specify the number of + keys to use + +2005-01-03 19:38 fc_botelho + + * src/bmz.c: using less memory + +2005-01-03 18:47 fc_botelho + + * src/: bmz.c, graph.c: using less space to store the used_edges + and critical_nodes arrays + +2004-12-23 11:16 davi + + * INSTALL, COPYING, AUTHORS, ChangeLog, Makefile.am, NEWS, README, + cmph.spec, configure.ac, src/graph.c, tests/Makefile.am, + tests/graph_tests.c, src/bmz.c, src/cmph_types.h, + src/czech_structs.h, src/hash_state.h, src/jenkins_hash.c, + src/bmz_structs.h, src/cmph.c, src/cmph.h, src/cmph_structs.h, + src/czech.c, src/debug.h, src/djb2_hash.c, src/djb2_hash.h, + src/fnv_hash.c, src/fnv_hash.h, src/graph.h, src/hash.c, + src/hash.h, src/jenkins_hash.h, src/sdbm_hash.c, src/vstack.h, + src/Makefile.am, src/bmz.h, src/cmph_structs.c, src/czech.h, + src/main.c, src/sdbm_hash.h, src/vqueue.c, src/vqueue.h, 
+ src/vstack.c: Initial release. + +2004-12-23 11:16 davi + + * INSTALL, COPYING, AUTHORS, ChangeLog, Makefile.am, NEWS, README, + cmph.spec, configure.ac, src/graph.c, tests/Makefile.am, + tests/graph_tests.c, src/bmz.c, src/cmph_types.h, + src/czech_structs.h, src/hash_state.h, src/jenkins_hash.c, + src/bmz_structs.h, src/cmph.c, src/cmph.h, src/cmph_structs.h, + src/czech.c, src/debug.h, src/djb2_hash.c, src/djb2_hash.h, + src/fnv_hash.c, src/fnv_hash.h, src/graph.h, src/hash.c, + src/hash.h, src/jenkins_hash.h, src/sdbm_hash.c, src/vstack.h, + src/Makefile.am, src/bmz.h, src/cmph_structs.c, src/czech.h, + src/main.c, src/sdbm_hash.h, src/vqueue.c, src/vqueue.h, + src/vstack.c: Initial revision + diff --git a/DOC.css b/DOC.css new file mode 100644 index 0000000..db09b2d --- /dev/null +++ b/DOC.css @@ -0,0 +1,33 @@ +/* implement both fixed-size and relative sizes */ +SMALL.XTINY { } +SMALL.TINY { } +SMALL.SCRIPTSIZE { } +BODY { font-size: 13px } +TD { font-size: 13px } +SMALL.FOOTNOTESIZE { font-size: 13px } +SMALL.SMALL { } +BIG.LARGE { } +BIG.XLARGE { } +BIG.XXLARGE { } +BIG.HUGE { } +BIG.XHUGE { } + +/* heading styles */ +H1 { } +H2 { } +H3 { } +H4 { } +H5 { } + + +/* mathematics styles */ +DIV.displaymath { } /* math displays */ +TD.eqno { } /* equation-number cells */ + + +/* document-specific styles come next */ +DIV.navigation { } +DIV.center { } +SPAN.textit { font-style: italic } +SPAN.arabic { } +SPAN.eqn-number { } diff --git a/Doxyfile b/Doxyfile new file mode 100644 index 0000000..aa0402f --- /dev/null +++ b/Doxyfile @@ -0,0 +1,1153 @@ +# Doxyfile 1.3.8 + +# This file describes the settings to be used by the documentation system +# doxygen (www.doxygen.org) for a project +# +# All text after a hash (#) is considered a comment and will be ignored +# The format is: +# TAG = value [value, ...] +# For lists, items can also be appended using: +# TAG += value [value, ...] 
+# Values that contain spaces should be placed between quotes (" ") + +#--------------------------------------------------------------------------- +# Project related configuration options +#--------------------------------------------------------------------------- + +# The PROJECT_NAME tag is a single word (or a sequence of words surrounded +# by quotes) that should identify the project. + +PROJECT_NAME = cmph + +# The PROJECT_NUMBER tag can be used to enter a project or revision number. +# This could be handy for archiving the generated documentation or +# if some version control system is used. + +PROJECT_NUMBER = + +# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) +# base path where the generated documentation will be put. +# If a relative path is entered, it will be relative to the location +# where doxygen was started. If left blank the current directory will be used. + +OUTPUT_DIRECTORY = docs + +# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create +# 4096 sub-directories (in 2 levels) under the output directory of each output +# format and will distribute the generated files over these directories. +# Enabling this option can be useful when feeding doxygen a huge amount of source +# files, where putting all generated files in the same directory would otherwise +# cause performance problems for the file system. + +CREATE_SUBDIRS = NO + +# The OUTPUT_LANGUAGE tag is used to specify the language in which all +# documentation generated by doxygen is written. Doxygen will use this +# information to generate all constant output in the proper language. 
+# The default language is English, other supported languages are: +# Brazilian, Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish, +# Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, +# Japanese-en (Japanese with English messages), Korean, Korean-en, Norwegian, +# Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, +# Swedish, and Ukrainian. + +OUTPUT_LANGUAGE = English + +# This tag can be used to specify the encoding used in the generated output. +# The encoding is not always determined by the language that is chosen, +# but also whether or not the output is meant for Windows or non-Windows users. +# In case there is a difference, setting the USE_WINDOWS_ENCODING tag to YES +# forces the Windows encoding (this is the default for the Windows binary), +# whereas setting the tag to NO uses a Unix-style encoding (the default for +# all platforms other than Windows). + +USE_WINDOWS_ENCODING = NO + +# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will +# include brief member descriptions after the members that are listed in +# the file and class documentation (similar to JavaDoc). +# Set to NO to disable this. + +BRIEF_MEMBER_DESC = YES + +# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend +# the brief description of a member or function before the detailed description. +# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the +# brief descriptions will be completely suppressed. + +REPEAT_BRIEF = YES + +# This tag implements a quasi-intelligent brief description abbreviator +# that is used to form the text in various listings. Each string +# in this list, if found as the leading text of the brief description, will be +# stripped from the text and the result after processing the whole list, is used +# as the annotated text. Otherwise, the brief description is used as-is. 
If left +# blank, the following values are used ("$name" is automatically replaced with the +# name of the entity): "The $name class" "The $name widget" "The $name file" +# "is" "provides" "specifies" "contains" "represents" "a" "an" "the" + +ABBREVIATE_BRIEF = + +# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then +# Doxygen will generate a detailed section even if there is only a brief +# description. + +ALWAYS_DETAILED_SEC = NO + +# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all inherited +# members of a class in the documentation of that class as if those members were +# ordinary class members. Constructors, destructors and assignment operators of +# the base classes will not be shown. + +INLINE_INHERITED_MEMB = NO + +# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full +# path before each file name in the file list and in the header files. If set +# to NO the shortest path that makes the file name unique will be used. + +FULL_PATH_NAMES = YES + +# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag +# can be used to strip a user-defined part of the path. Stripping is +# only done if one of the specified strings matches the left-hand part of +# the path. The tag can be used to show relative paths in the file list. +# If left blank the directory from which doxygen is run is used as the +# path to strip. + +STRIP_FROM_PATH = + +# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of +# the path mentioned in the documentation of a class, which tells +# the reader which header file to include in order to use a class. +# If left blank only the name of the header file containing the class +# definition is used. Otherwise one should specify the include paths that +# are normally passed to the compiler using the -I flag. + +STRIP_FROM_INC_PATH = + +# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter +# (but less readable) file names. 
This can be useful if your file system +# doesn't support long names like on DOS, Mac, or CD-ROM. + +SHORT_NAMES = NO + +# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen +# will interpret the first line (until the first dot) of a JavaDoc-style +# comment as the brief description. If set to NO, the JavaDoc +# comments will behave just like the Qt-style comments (thus requiring an +# explicit @brief command for a brief description). + +JAVADOC_AUTOBRIEF = NO + +# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen +# treat a multi-line C++ special comment block (i.e. a block of //! or /// +# comments) as a brief description. This used to be the default behaviour. +# The new default is to treat a multi-line C++ comment block as a detailed +# description. Set this tag to YES if you prefer the old behaviour instead. + +MULTILINE_CPP_IS_BRIEF = NO + +# If the DETAILS_AT_TOP tag is set to YES then Doxygen +# will output the detailed description near the top, like JavaDoc. +# If set to NO, the detailed description appears after the member +# documentation. + +DETAILS_AT_TOP = NO + +# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented +# member inherits the documentation from any documented member that it +# re-implements. + +INHERIT_DOCS = YES + +# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC +# tag is set to YES, then doxygen will reuse the documentation of the first +# member in the group (if any) for the other members of the group. By default +# all members of a group must be documented explicitly. + +DISTRIBUTE_GROUP_DOC = NO + +# The TAB_SIZE tag can be used to set the number of spaces in a tab. +# Doxygen uses this value to replace tabs by spaces in code fragments. + +TAB_SIZE = 8 + +# This tag can be used to specify a number of aliases that act +# as commands in the documentation. An alias has the form "name=value". 
+# For example adding "sideeffect=\par Side Effects:\n" will allow you to +# put the command \sideeffect (or @sideeffect) in the documentation, which +# will result in a user-defined paragraph with heading "Side Effects:". +# You can put \n's in the value part of an alias to insert newlines. + +ALIASES = + +# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources +# only. Doxygen will then generate output that is more tailored for C. +# For instance, some of the names that are used will be different. The list +# of all members will be omitted, etc. + +OPTIMIZE_OUTPUT_FOR_C = YES + +# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java sources +# only. Doxygen will then generate output that is more tailored for Java. +# For instance, namespaces will be presented as packages, qualified scopes +# will look different, etc. + +OPTIMIZE_OUTPUT_JAVA = NO + +# Set the SUBGROUPING tag to YES (the default) to allow class member groups of +# the same type (for instance a group of public functions) to be put as a +# subgroup of that type (e.g. under the Public Functions section). Set it to +# NO to prevent subgrouping. Alternatively, this can be done per class using +# the \nosubgrouping command. + +SUBGROUPING = YES + +#--------------------------------------------------------------------------- +# Build related configuration options +#--------------------------------------------------------------------------- + +# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in +# documentation are documented, even if no documentation was available. +# Private class members and static file members will be hidden unless +# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES + +EXTRACT_ALL = NO + +# If the EXTRACT_PRIVATE tag is set to YES all private members of a class +# will be included in the documentation. 
+ +EXTRACT_PRIVATE = NO + +# If the EXTRACT_STATIC tag is set to YES all static members of a file +# will be included in the documentation. + +EXTRACT_STATIC = NO + +# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) +# defined locally in source files will be included in the documentation. +# If set to NO only classes defined in header files are included. + +EXTRACT_LOCAL_CLASSES = YES + +# This flag is only useful for Objective-C code. When set to YES local +# methods, which are defined in the implementation section but not in +# the interface are included in the documentation. +# If set to NO (the default) only methods in the interface are included. + +EXTRACT_LOCAL_METHODS = NO + +# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all +# undocumented members of documented classes, files or namespaces. +# If set to NO (the default) these members will be included in the +# various overviews, but no documentation section is generated. +# This option has no effect if EXTRACT_ALL is enabled. + +HIDE_UNDOC_MEMBERS = NO + +# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all +# undocumented classes that are normally visible in the class hierarchy. +# If set to NO (the default) these classes will be included in the various +# overviews. This option has no effect if EXTRACT_ALL is enabled. + +HIDE_UNDOC_CLASSES = NO + +# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all +# friend (class|struct|union) declarations. +# If set to NO (the default) these declarations will be included in the +# documentation. + +HIDE_FRIEND_COMPOUNDS = NO + +# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any +# documentation blocks found inside the body of a function. +# If set to NO (the default) these blocks will be appended to the +# function's detailed documentation block. 
+ +HIDE_IN_BODY_DOCS = NO + +# The INTERNAL_DOCS tag determines if documentation +# that is typed after a \internal command is included. If the tag is set +# to NO (the default) then the documentation will be excluded. +# Set it to YES to include the internal documentation. + +INTERNAL_DOCS = NO + +# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate +# file names in lower-case letters. If set to YES upper-case letters are also +# allowed. This is useful if you have classes or files whose names only differ +# in case and if your file system supports case sensitive file names. Windows +# and Mac users are advised to set this option to NO. + +CASE_SENSE_NAMES = YES + +# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen +# will show members with their full class and namespace scopes in the +# documentation. If set to YES the scope will be hidden. + +HIDE_SCOPE_NAMES = NO + +# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen +# will put a list of the files that are included by a file in the documentation +# of that file. + +SHOW_INCLUDE_FILES = YES + +# If the INLINE_INFO tag is set to YES (the default) then a tag [inline] +# is inserted in the documentation for inline members. + +INLINE_INFO = YES + +# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen +# will sort the (detailed) documentation of file and class members +# alphabetically by member name. If set to NO the members will appear in +# declaration order. + +SORT_MEMBER_DOCS = YES + +# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the +# brief documentation of file, namespace and class members alphabetically +# by member name. If set to NO (the default) the members will appear in +# declaration order. + +SORT_BRIEF_DOCS = NO + +# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be +# sorted by fully-qualified names, including namespaces. 
If set to +# NO (the default), the class list will be sorted only by class name, +# not including the namespace part. +# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. +# Note: This option applies only to the class list, not to the +# alphabetical list. + +SORT_BY_SCOPE_NAME = NO + +# The GENERATE_TODOLIST tag can be used to enable (YES) or +# disable (NO) the todo list. This list is created by putting \todo +# commands in the documentation. + +GENERATE_TODOLIST = YES + +# The GENERATE_TESTLIST tag can be used to enable (YES) or +# disable (NO) the test list. This list is created by putting \test +# commands in the documentation. + +GENERATE_TESTLIST = YES + +# The GENERATE_BUGLIST tag can be used to enable (YES) or +# disable (NO) the bug list. This list is created by putting \bug +# commands in the documentation. + +GENERATE_BUGLIST = YES + +# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or +# disable (NO) the deprecated list. This list is created by putting +# \deprecated commands in the documentation. + +GENERATE_DEPRECATEDLIST= YES + +# The ENABLED_SECTIONS tag can be used to enable conditional +# documentation sections, marked by \if sectionname ... \endif. + +ENABLED_SECTIONS = + +# The MAX_INITIALIZER_LINES tag determines the maximum number of lines +# the initial value of a variable or define consists of for it to appear in +# the documentation. If the initializer consists of more lines than specified +# here it will be hidden. Use a value of 0 to hide initializers completely. +# The appearance of the initializer of individual variables and defines in the +# documentation can be controlled using \showinitializer or \hideinitializer +# command in the documentation regardless of this setting. + +MAX_INITIALIZER_LINES = 30 + +# Set the SHOW_USED_FILES tag to NO to disable the list of files generated +# at the bottom of the documentation of classes and structs. 
If set to YES the +# list will mention the files that were used to generate the documentation. + +SHOW_USED_FILES = YES + +#--------------------------------------------------------------------------- +# configuration options related to warning and progress messages +#--------------------------------------------------------------------------- + +# The QUIET tag can be used to turn on/off the messages that are generated +# by doxygen. Possible values are YES and NO. If left blank NO is used. + +QUIET = NO + +# The WARNINGS tag can be used to turn on/off the warning messages that are +# generated by doxygen. Possible values are YES and NO. If left blank +# NO is used. + +WARNINGS = YES + +# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings +# for undocumented members. If EXTRACT_ALL is set to YES then this flag will +# automatically be disabled. + +WARN_IF_UNDOCUMENTED = YES + +# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for +# potential errors in the documentation, such as not documenting some +# parameters in a documented function, or documenting parameters that +# don't exist or using markup commands wrongly. + +WARN_IF_DOC_ERROR = YES + +# The WARN_FORMAT tag determines the format of the warning messages that +# doxygen can produce. The string should contain the $file, $line, and $text +# tags, which will be replaced by the file and line number from which the +# warning originated and the warning text. + +WARN_FORMAT = "$file:$line: $text" + +# The WARN_LOGFILE tag can be used to specify a file to which warning +# and error messages should be written. If left blank the output is written +# to stderr. 
+ +WARN_LOGFILE = + +#--------------------------------------------------------------------------- +# configuration options related to the input files +#--------------------------------------------------------------------------- + +# The INPUT tag can be used to specify the files and/or directories that contain +# documented source files. You may enter file names like "myfile.cpp" or +# directories like "/usr/src/myproject". Separate the files or directories +# with spaces. + +INPUT = + +# If the value of the INPUT tag contains directories, you can use the +# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp +# and *.h) to filter out the source-files in the directories. If left +# blank the following patterns are tested: +# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx *.hpp +# *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm + +FILE_PATTERNS = + +# The RECURSIVE tag can be used to specify whether or not subdirectories +# should be searched for input files as well. Possible values are YES and NO. +# If left blank NO is used. + +RECURSIVE = NO + +# The EXCLUDE tag can be used to specify files and/or directories that should +# be excluded from the INPUT source files. This way you can easily exclude a +# subdirectory from a directory tree whose root is specified with the INPUT tag. + +EXCLUDE = + +# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or directories +# that are symbolic links (a Unix filesystem feature) are excluded from the input. + +EXCLUDE_SYMLINKS = NO + +# If the value of the INPUT tag contains directories, you can use the +# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude +# certain files from those directories. + +EXCLUDE_PATTERNS = + +# The EXAMPLE_PATH tag can be used to specify one or more files or +# directories that contain example code fragments that are included (see +# the \include command). 
+ +EXAMPLE_PATH = + +# If the value of the EXAMPLE_PATH tag contains directories, you can use the +# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp +# and *.h) to filter out the source-files in the directories. If left +# blank all files are included. + +EXAMPLE_PATTERNS = + +# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be +# searched for input files to be used with the \include or \dontinclude +# commands irrespective of the value of the RECURSIVE tag. +# Possible values are YES and NO. If left blank NO is used. + +EXAMPLE_RECURSIVE = NO + +# The IMAGE_PATH tag can be used to specify one or more files or +# directories that contain images that are included in the documentation (see +# the \image command). + +IMAGE_PATH = + +# The INPUT_FILTER tag can be used to specify a program that doxygen should +# invoke to filter for each input file. Doxygen will invoke the filter program +# by executing (via popen()) the command <filter> <input-file>, where <filter> +# is the value of the INPUT_FILTER tag, and <input-file> is the name of an +# input file. Doxygen will then use the output that the filter program writes +# to standard output. If FILTER_PATTERNS is specified, this tag will be +# ignored. + +INPUT_FILTER = + +# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern +# basis. Doxygen will compare the file name with each pattern and apply the +# filter if there is a match. The filters are a list of the form: +# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further +# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER +# is applied to all files. + +FILTER_PATTERNS = + +# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using +# INPUT_FILTER) will be used to filter the input files when producing source +# files to browse (i.e. when SOURCE_BROWSER is set to YES). 
+ +FILTER_SOURCE_FILES = NO + +#--------------------------------------------------------------------------- +# configuration options related to source browsing +#--------------------------------------------------------------------------- + +# If the SOURCE_BROWSER tag is set to YES then a list of source files will +# be generated. Documented entities will be cross-referenced with these sources. +# Note: To get rid of all source code in the generated output, make sure also +# VERBATIM_HEADERS is set to NO. + +SOURCE_BROWSER = NO + +# Setting the INLINE_SOURCES tag to YES will include the body +# of functions and classes directly in the documentation. + +INLINE_SOURCES = NO + +# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct +# doxygen to hide any special comment blocks from generated source code +# fragments. Normal C and C++ comments will always remain visible. + +STRIP_CODE_COMMENTS = YES + +# If the REFERENCED_BY_RELATION tag is set to YES (the default) +# then for each documented function all documented +# functions referencing it will be listed. + +REFERENCED_BY_RELATION = YES + +# If the REFERENCES_RELATION tag is set to YES (the default) +# then for each documented function all documented entities +# called/used by that function will be listed. + +REFERENCES_RELATION = YES + +# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen +# will generate a verbatim copy of the header file for each class for +# which an include is specified. Set to NO to disable this. + +VERBATIM_HEADERS = YES + +#--------------------------------------------------------------------------- +# configuration options related to the alphabetical class index +#--------------------------------------------------------------------------- + +# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index +# of all compounds will be generated. Enable this if the project +# contains a lot of classes, structs, unions or interfaces. 
+ +ALPHABETICAL_INDEX = NO + +# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then +# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns +# in which this list will be split (can be a number in the range [1..20]) + +COLS_IN_ALPHA_INDEX = 5 + +# In case all classes in a project start with a common prefix, all +# classes will be put under the same header in the alphabetical index. +# The IGNORE_PREFIX tag can be used to specify one or more prefixes that +# should be ignored while generating the index headers. + +IGNORE_PREFIX = + +#--------------------------------------------------------------------------- +# configuration options related to the HTML output +#--------------------------------------------------------------------------- + +# If the GENERATE_HTML tag is set to YES (the default) Doxygen will +# generate HTML output. + +GENERATE_HTML = YES + +# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. +# If a relative path is entered the value of OUTPUT_DIRECTORY will be +# put in front of it. If left blank `html' will be used as the default path. + +HTML_OUTPUT = html + +# The HTML_FILE_EXTENSION tag can be used to specify the file extension for +# each generated HTML page (for example: .htm,.php,.asp). If it is left blank +# doxygen will generate files with .html extension. + +HTML_FILE_EXTENSION = .html + +# The HTML_HEADER tag can be used to specify a personal HTML header for +# each generated HTML page. If it is left blank doxygen will generate a +# standard header. + +HTML_HEADER = + +# The HTML_FOOTER tag can be used to specify a personal HTML footer for +# each generated HTML page. If it is left blank doxygen will generate a +# standard footer. + +HTML_FOOTER = + +# The HTML_STYLESHEET tag can be used to specify a user-defined cascading +# style sheet that is used by each HTML page. It can be used to +# fine-tune the look of the HTML output. 
If the tag is left blank doxygen +# will generate a default style sheet. Note that doxygen will try to copy +# the style sheet file to the HTML output directory, so don't put your own +# stylesheet in the HTML output directory as well, or it will be erased! + +HTML_STYLESHEET = + +# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, +# files or namespaces will be aligned in HTML using tables. If set to +# NO a bullet list will be used. + +HTML_ALIGN_MEMBERS = YES + +# If the GENERATE_HTMLHELP tag is set to YES, additional index files +# will be generated that can be used as input for tools like the +# Microsoft HTML help workshop to generate a compressed HTML help file (.chm) +# of the generated HTML documentation. + +GENERATE_HTMLHELP = NO + +# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can +# be used to specify the file name of the resulting .chm file. You +# can add a path in front of the file if the result should not be +# written to the html output directory. + +CHM_FILE = + +# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can +# be used to specify the location (absolute path including file name) of +# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run +# the HTML help compiler on the generated index.hhp. + +HHC_LOCATION = + +# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag +# controls if a separate .chi index file is generated (YES) or that +# it should be included in the master .chm file (NO). + +GENERATE_CHI = NO + +# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag +# controls whether a binary table of contents is generated (YES) or a +# normal table of contents (NO) in the .chm file. + +BINARY_TOC = NO + +# The TOC_EXPAND flag can be set to YES to add extra items for group members +# to the contents of the HTML help documentation and to the tree view. 
+ +TOC_EXPAND = NO + +# The DISABLE_INDEX tag can be used to turn on/off the condensed index at +# top of each HTML page. The value NO (the default) enables the index and +# the value YES disables it. + +DISABLE_INDEX = NO + +# This tag can be used to set the number of enum values (range [1..20]) +# that doxygen will group on one line in the generated HTML documentation. + +ENUM_VALUES_PER_LINE = 4 + +# If the GENERATE_TREEVIEW tag is set to YES, a side panel will be +# generated containing a tree-like index structure (just like the one that +# is generated for HTML Help). For this to work a browser that supports +# JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+, +# Netscape 6.0+, Internet explorer 5.0+, or Konqueror). Windows users are +# probably better off using the HTML help feature. + +GENERATE_TREEVIEW = NO + +# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be +# used to set the initial width (in pixels) of the frame in which the tree +# is shown. + +TREEVIEW_WIDTH = 250 + +#--------------------------------------------------------------------------- +# configuration options related to the LaTeX output +#--------------------------------------------------------------------------- + +# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will +# generate Latex output. + +GENERATE_LATEX = YES + +# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. +# If a relative path is entered the value of OUTPUT_DIRECTORY will be +# put in front of it. If left blank `latex' will be used as the default path. + +LATEX_OUTPUT = latex + +# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be +# invoked. If left blank `latex' will be used as the default command name. + +LATEX_CMD_NAME = latex + +# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to +# generate index for LaTeX. If left blank `makeindex' will be used as the +# default command name. 
+ +MAKEINDEX_CMD_NAME = makeindex + +# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact +# LaTeX documents. This may be useful for small projects and may help to +# save some trees in general. + +COMPACT_LATEX = NO + +# The PAPER_TYPE tag can be used to set the paper type that is used +# by the printer. Possible values are: a4, a4wide, letter, legal and +# executive. If left blank a4wide will be used. + +PAPER_TYPE = a4wide + +# The EXTRA_PACKAGES tag can be to specify one or more names of LaTeX +# packages that should be included in the LaTeX output. + +EXTRA_PACKAGES = + +# The LATEX_HEADER tag can be used to specify a personal LaTeX header for +# the generated latex document. The header should contain everything until +# the first chapter. If it is left blank doxygen will generate a +# standard header. Notice: only use this tag if you know what you are doing! + +LATEX_HEADER = + +# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated +# is prepared for conversion to pdf (using ps2pdf). The pdf file will +# contain links (just like the HTML output) instead of page references +# This makes the output suitable for online browsing using a pdf viewer. + +PDF_HYPERLINKS = NO + +# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of +# plain latex in the generated Makefile. Set this option to YES to get a +# higher quality PDF documentation. + +USE_PDFLATEX = NO + +# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode. +# command to the generated LaTeX files. This will instruct LaTeX to keep +# running if errors occur, instead of asking the user for help. +# This option is also used when generating formulas in HTML. + +LATEX_BATCHMODE = NO + +# If LATEX_HIDE_INDICES is set to YES then doxygen will not +# include the index chapters (such as File Index, Compound Index, etc.) +# in the output. 
+ +LATEX_HIDE_INDICES = NO + +#--------------------------------------------------------------------------- +# configuration options related to the RTF output +#--------------------------------------------------------------------------- + +# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output +# The RTF output is optimized for Word 97 and may not look very pretty with +# other RTF readers or editors. + +GENERATE_RTF = NO + +# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. +# If a relative path is entered the value of OUTPUT_DIRECTORY will be +# put in front of it. If left blank `rtf' will be used as the default path. + +RTF_OUTPUT = rtf + +# If the COMPACT_RTF tag is set to YES Doxygen generates more compact +# RTF documents. This may be useful for small projects and may help to +# save some trees in general. + +COMPACT_RTF = NO + +# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated +# will contain hyperlink fields. The RTF file will +# contain links (just like the HTML output) instead of page references. +# This makes the output suitable for online browsing using WORD or other +# programs which support those fields. +# Note: wordpad (write) and others do not support links. + +RTF_HYPERLINKS = NO + +# Load stylesheet definitions from file. Syntax is similar to doxygen's +# config file, i.e. a series of assignments. You only have to provide +# replacements, missing definitions are set to their default value. + +RTF_STYLESHEET_FILE = + +# Set optional variables used in the generation of an rtf document. +# Syntax is similar to doxygen's config file. 
+ +RTF_EXTENSIONS_FILE = + +#--------------------------------------------------------------------------- +# configuration options related to the man page output +#--------------------------------------------------------------------------- + +# If the GENERATE_MAN tag is set to YES (the default) Doxygen will +# generate man pages + +GENERATE_MAN = YES + +# The MAN_OUTPUT tag is used to specify where the man pages will be put. +# If a relative path is entered the value of OUTPUT_DIRECTORY will be +# put in front of it. If left blank `man' will be used as the default path. + +MAN_OUTPUT = man + +# The MAN_EXTENSION tag determines the extension that is added to +# the generated man pages (default is the subroutine's section .3) + +MAN_EXTENSION = .3 + +# If the MAN_LINKS tag is set to YES and Doxygen generates man output, +# then it will generate one additional man file for each entity +# documented in the real man page(s). These additional files +# only source the real man page, but without them the man command +# would be unable to find the correct page. The default is NO. + +MAN_LINKS = NO + +#--------------------------------------------------------------------------- +# configuration options related to the XML output +#--------------------------------------------------------------------------- + +# If the GENERATE_XML tag is set to YES Doxygen will +# generate an XML file that captures the structure of +# the code including all documentation. + +GENERATE_XML = NO + +# The XML_OUTPUT tag is used to specify where the XML pages will be put. +# If a relative path is entered the value of OUTPUT_DIRECTORY will be +# put in front of it. If left blank `xml' will be used as the default path. + +XML_OUTPUT = xml + +# The XML_SCHEMA tag can be used to specify an XML schema, +# which can be used by a validating XML parser to check the +# syntax of the XML files. 
+ +XML_SCHEMA = + +# The XML_DTD tag can be used to specify an XML DTD, +# which can be used by a validating XML parser to check the +# syntax of the XML files. + +XML_DTD = + +# If the XML_PROGRAMLISTING tag is set to YES Doxygen will +# dump the program listings (including syntax highlighting +# and cross-referencing information) to the XML output. Note that +# enabling this will significantly increase the size of the XML output. + +XML_PROGRAMLISTING = YES + +#--------------------------------------------------------------------------- +# configuration options for the AutoGen Definitions output +#--------------------------------------------------------------------------- + +# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will +# generate an AutoGen Definitions (see autogen.sf.net) file +# that captures the structure of the code including all +# documentation. Note that this feature is still experimental +# and incomplete at the moment. + +GENERATE_AUTOGEN_DEF = NO + +#--------------------------------------------------------------------------- +# configuration options related to the Perl module output +#--------------------------------------------------------------------------- + +# If the GENERATE_PERLMOD tag is set to YES Doxygen will +# generate a Perl module file that captures the structure of +# the code including all documentation. Note that this +# feature is still experimental and incomplete at the +# moment. + +GENERATE_PERLMOD = NO + +# If the PERLMOD_LATEX tag is set to YES Doxygen will generate +# the necessary Makefile rules, Perl scripts and LaTeX code to be able +# to generate PDF and DVI output from the Perl module output. + +PERLMOD_LATEX = NO + +# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be +# nicely formatted so it can be parsed by a human reader. This is useful +# if you want to understand what is going on. 
On the other hand, if this +# tag is set to NO the size of the Perl module output will be much smaller +# and Perl will parse it just the same. + +PERLMOD_PRETTY = YES + +# The names of the make variables in the generated doxyrules.make file +# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. +# This is useful so different doxyrules.make files included by the same +# Makefile don't overwrite each other's variables. + +PERLMOD_MAKEVAR_PREFIX = + +#--------------------------------------------------------------------------- +# Configuration options related to the preprocessor +#--------------------------------------------------------------------------- + +# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will +# evaluate all C-preprocessor directives found in the sources and include +# files. + +ENABLE_PREPROCESSING = YES + +# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro +# names in the source code. If set to NO (the default) only conditional +# compilation will be performed. Macro expansion can be done in a controlled +# way by setting EXPAND_ONLY_PREDEF to YES. + +MACRO_EXPANSION = NO + +# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES +# then the macro expansion is limited to the macros specified with the +# PREDEFINED and EXPAND_AS_PREDEFINED tags. + +EXPAND_ONLY_PREDEF = NO + +# If the SEARCH_INCLUDES tag is set to YES (the default) the includes files +# in the INCLUDE_PATH (see below) will be search if a #include is found. + +SEARCH_INCLUDES = YES + +# The INCLUDE_PATH tag can be used to specify one or more directories that +# contain include files that are not input files but should be processed by +# the preprocessor. + +INCLUDE_PATH = + +# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard +# patterns (like *.h and *.hpp) to filter out the header-files in the +# directories. If left blank, the patterns specified with FILE_PATTERNS will +# be used. 
+ +INCLUDE_FILE_PATTERNS = + +# The PREDEFINED tag can be used to specify one or more macro names that +# are defined before the preprocessor is started (similar to the -D option of +# gcc). The argument of the tag is a list of macros of the form: name +# or name=definition (no spaces). If the definition and the = are +# omitted =1 is assumed. + +PREDEFINED = + +# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then +# this tag can be used to specify a list of macro names that should be expanded. +# The macro definition that is found in the sources will be used. +# Use the PREDEFINED tag if you want to use a different macro definition. + +EXPAND_AS_DEFINED = + +# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then +# doxygen's preprocessor will remove all function-like macros that are alone +# on a line, have an all uppercase name, and do not end with a semicolon. Such +# function macros are typically used for boiler-plate code, and will confuse the +# parser if not removed. + +SKIP_FUNCTION_MACROS = YES + +#--------------------------------------------------------------------------- +# Configuration::additions related to external references +#--------------------------------------------------------------------------- + +# The TAGFILES option can be used to specify one or more tagfiles. +# Optionally an initial location of the external documentation +# can be added for each tagfile. The format of a tag file without +# this location is as follows: +# TAGFILES = file1 file2 ... +# Adding location for the tag files is done as follows: +# TAGFILES = file1=loc1 "file2 = loc2" ... +# where "loc1" and "loc2" can be relative or absolute paths or +# URLs. If a location is present for each tag, the installdox tool +# does not have to be run to correct the links. 
+# Note that each tag file must have a unique name +# (where the name does NOT include the path) +# If a tag file is not located in the directory in which doxygen +# is run, you must also specify the path to the tagfile here. + +TAGFILES = + +# When a file name is specified after GENERATE_TAGFILE, doxygen will create +# a tag file that is based on the input files it reads. + +GENERATE_TAGFILE = + +# If the ALLEXTERNALS tag is set to YES all external classes will be listed +# in the class index. If set to NO only the inherited external classes +# will be listed. + +ALLEXTERNALS = NO + +# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed +# in the modules index. If set to NO, only the current project's groups will +# be listed. + +EXTERNAL_GROUPS = YES + +# The PERL_PATH should be the absolute path and name of the perl script +# interpreter (i.e. the result of `which perl'). + +PERL_PATH = /usr/bin/perl + +#--------------------------------------------------------------------------- +# Configuration options related to the dot tool +#--------------------------------------------------------------------------- + +# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will +# generate a inheritance diagram (in HTML, RTF and LaTeX) for classes with base or +# super classes. Setting the tag to NO turns the diagrams off. Note that this +# option is superseded by the HAVE_DOT option below. This is only a fallback. It is +# recommended to install and use dot, since it yields more powerful graphs. + +CLASS_DIAGRAMS = YES + +# If set to YES, the inheritance and collaboration graphs will hide +# inheritance and usage relations if the target is undocumented +# or is not a class. + +HIDE_UNDOC_RELATIONS = YES + +# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is +# available from the path. This tool is part of Graphviz, a graph visualization +# toolkit from AT&T and Lucent Bell Labs. 
The other options in this section +# have no effect if this option is set to NO (the default) + +HAVE_DOT = NO + +# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen +# will generate a graph for each documented class showing the direct and +# indirect inheritance relations. Setting this tag to YES will force the +# the CLASS_DIAGRAMS tag to NO. + +CLASS_GRAPH = YES + +# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen +# will generate a graph for each documented class showing the direct and +# indirect implementation dependencies (inheritance, containment, and +# class references variables) of the class with other documented classes. + +COLLABORATION_GRAPH = YES + +# If the UML_LOOK tag is set to YES doxygen will generate inheritance and +# collaboration diagrams in a style similar to the OMG's Unified Modeling +# Language. + +UML_LOOK = NO + +# If set to YES, the inheritance and collaboration graphs will show the +# relations between templates and their instances. + +TEMPLATE_RELATIONS = NO + +# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT +# tags are set to YES then doxygen will generate a graph for each documented +# file showing the direct and indirect include dependencies of the file with +# other documented files. + +INCLUDE_GRAPH = YES + +# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and +# HAVE_DOT tags are set to YES then doxygen will generate a graph for each +# documented header file showing the documented files that directly or +# indirectly include this file. + +INCLUDED_BY_GRAPH = YES + +# If the CALL_GRAPH and HAVE_DOT tags are set to YES then doxygen will +# generate a call dependency graph for every global function or class method. +# Note that enabling this option will significantly increase the time of a run. +# So in most cases it will be better to enable call graphs for selected +# functions only using the \callgraph command. 
+ +CALL_GRAPH = NO + +# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen +# will graphical hierarchy of all classes instead of a textual one. + +GRAPHICAL_HIERARCHY = YES + +# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images +# generated by dot. Possible values are png, jpg, or gif +# If left blank png will be used. + +DOT_IMAGE_FORMAT = png + +# The tag DOT_PATH can be used to specify the path where the dot tool can be +# found. If left blank, it is assumed the dot tool can be found on the path. + +DOT_PATH = + +# The DOTFILE_DIRS tag can be used to specify one or more directories that +# contain dot files that are included in the documentation (see the +# \dotfile command). + +DOTFILE_DIRS = + +# The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width +# (in pixels) of the graphs generated by dot. If a graph becomes larger than +# this value, doxygen will try to truncate the graph, so that it fits within +# the specified constraint. Beware that most browsers cannot cope with very +# large images. + +MAX_DOT_GRAPH_WIDTH = 1024 + +# The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allows height +# (in pixels) of the graphs generated by dot. If a graph becomes larger than +# this value, doxygen will try to truncate the graph, so that it fits within +# the specified constraint. Beware that most browsers cannot cope with very +# large images. + +MAX_DOT_GRAPH_HEIGHT = 1024 + +# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the +# graphs generated by dot. A depth value of 3 means that only nodes reachable +# from the root by following a path via at most 3 edges will be shown. Nodes that +# lay further from the root node will be omitted. Note that setting this option to +# 1 or 2 may greatly reduce the computation time needed for large code bases. 
Also
+# note that a graph may be further truncated if the graph's image dimensions are
+# not sufficient to fit the graph (see MAX_DOT_GRAPH_WIDTH and MAX_DOT_GRAPH_HEIGHT).
+# If 0 is used for the depth value (the default), the graph is not depth-constrained.
+
+MAX_DOT_GRAPH_DEPTH = 0
+
+# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will
+# generate a legend page explaining the meaning of the various boxes and
+# arrows in the dot generated graphs.
+
+GENERATE_LEGEND = YES
+
+# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will
+# remove the intermediate dot files that are used to generate
+# the various graphs.
+
+DOT_CLEANUP = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to the search engine
+#---------------------------------------------------------------------------
+
+# The SEARCHENGINE tag specifies whether or not a search engine should be
+# used. If set to NO the values of all tags below this one will be ignored.
+
+SEARCHENGINE = NO
diff --git a/EXAMPLES.t2t b/EXAMPLES.t2t
new file mode 100644
index 0000000..cddb03a
--- /dev/null
+++ b/EXAMPLES.t2t
@@ -0,0 +1,152 @@
+CMPH - Examples
+
+
+%!includeconf: CONFIG.t2t
+
+Using cmph is quite simple. Take a look at the following examples.
+
+-------------------------------------------------------------------
+
+```
+#include <cmph.h>
+#include <string.h>
+// Create minimal perfect hash function from in-memory vector
+int main(int argc, char **argv)
+{
+
+    // Creating a filled vector
+    unsigned int i = 0;
+    const char *vector[] = {"aaaaaaaaaa", "bbbbbbbbbb", "cccccccccc", "dddddddddd", "eeeeeeeeee",
+        "ffffffffff", "gggggggggg", "hhhhhhhhhh", "iiiiiiiiii", "jjjjjjjjjj"};
+    unsigned int nkeys = 10;
+    FILE* mphf_fd = fopen("temp.mph", "w");
+    // Source of keys
+    cmph_io_adapter_t *source = cmph_io_vector_adapter((char **)vector, nkeys);
+
+    // Create minimal perfect hash function using the BRZ algorithm.
+
+    cmph_config_t *config = cmph_config_new(source);
+    cmph_config_set_algo(config, CMPH_BRZ);
+    cmph_config_set_mphf_fd(config, mphf_fd);
+    cmph_t *hash = cmph_new(config);
+    cmph_config_destroy(config);
+    cmph_dump(hash, mphf_fd);
+    cmph_destroy(hash);
+    fclose(mphf_fd);
+
+    // Find key
+    mphf_fd = fopen("temp.mph", "r");
+    hash = cmph_load(mphf_fd);
+    while (i < nkeys) {
+        const char *key = vector[i];
+        unsigned int id = cmph_search(hash, key, (cmph_uint32)strlen(key));
+        fprintf(stderr, "key:%s -- hash:%u\n", key, id);
+        i++;
+    }
+
+    // Destroy hash
+    cmph_destroy(hash);
+    cmph_io_vector_adapter_destroy(source);
+    fclose(mphf_fd);
+    return 0;
+}
+```
+Download [vector_adapter_ex1.c examples/vector_adapter_ex1.c]. This example does not work in versions below 0.6.
+-------------------------------
+
+```
+#include <cmph.h>
+#include <string.h>
+// Create minimal perfect hash function from an in-memory vector of structs
+
+#pragma pack(1)
+typedef struct {
+    cmph_uint32 id;
+    char key[11];
+    cmph_uint32 year;
+} rec_t;
+#pragma pack(0)
+
+int main(int argc, char **argv)
+{
+    // Creating a filled vector
+    unsigned int i = 0;
+    rec_t vector[10] = {{1, "aaaaaaaaaa", 1999}, {2, "bbbbbbbbbb", 2000}, {3, "cccccccccc", 2001},
+        {4, "dddddddddd", 2002}, {5, "eeeeeeeeee", 2003}, {6, "ffffffffff", 2004},
+        {7, "gggggggggg", 2005}, {8, "hhhhhhhhhh", 2006}, {9, "iiiiiiiiii", 2007},
+        {10,"jjjjjjjjjj", 2008}};
+    unsigned int nkeys = 10;
+    FILE* mphf_fd = fopen("temp_struct_vector.mph", "w");
+    // Source of keys
+    cmph_io_adapter_t *source = cmph_io_struct_vector_adapter(vector, (cmph_uint32)sizeof(rec_t), (cmph_uint32)sizeof(cmph_uint32), 11, nkeys);
+
+    // Create minimal perfect hash function using the BDZ algorithm.
+
+    cmph_config_t *config = cmph_config_new(source);
+    cmph_config_set_algo(config, CMPH_BDZ);
+    cmph_config_set_mphf_fd(config, mphf_fd);
+    cmph_t *hash = cmph_new(config);
+    cmph_config_destroy(config);
+    cmph_dump(hash, mphf_fd);
+    cmph_destroy(hash);
+    fclose(mphf_fd);
+
+    // Find key
+    mphf_fd = fopen("temp_struct_vector.mph", "r");
+    hash = cmph_load(mphf_fd);
+    while (i < nkeys) {
+        const char *key = vector[i].key;
+        unsigned int id = cmph_search(hash, key, 11);
+        fprintf(stderr, "key:%s -- hash:%u\n", key, id);
+        i++;
+    }
+
+    // Destroy hash
+    cmph_destroy(hash);
+    cmph_io_vector_adapter_destroy(source);
+    fclose(mphf_fd);
+    return 0;
+}
+```
+Download [struct_vector_adapter_ex3.c examples/struct_vector_adapter_ex3.c]. This example does not work in versions below 0.8.
+-------------------------------
+
+```
+#include <cmph.h>
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+// Create minimal perfect hash function from in-disk keys using the BDZ algorithm
+int main(int argc, char **argv)
+{
+    // Open file with newline separated list of keys
+    FILE * keys_fd = fopen("keys.txt", "r");
+    cmph_t *hash = NULL;
+    if (keys_fd == NULL)
+    {
+        fprintf(stderr, "File \"keys.txt\" not found\n");
+        exit(1);
+    }
+    // Source of keys
+    cmph_io_adapter_t *source = cmph_io_nlfile_adapter(keys_fd);
+
+    cmph_config_t *config = cmph_config_new(source);
+    cmph_config_set_algo(config, CMPH_BDZ);
+    hash = cmph_new(config);
+    cmph_config_destroy(config);
+
+    // Find key
+    const char *key = "jjjjjjjjjj";
+    unsigned int id = cmph_search(hash, key, (cmph_uint32)strlen(key));
+    fprintf(stderr, "Id:%u\n", id);
+    // Destroy hash
+    cmph_destroy(hash);
+    cmph_io_nlfile_adapter_destroy(source);
+    fclose(keys_fd);
+    return 0;
+}
+```
+Download [file_adapter_ex2.c examples/file_adapter_ex2.c] and [keys.txt examples/keys.txt]. This example does not work in versions below 0.8.
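As a rough sketch of how to build the examples above (our own illustration, not part of the distribution): the paths below assume cmph was installed under the default `/usr/local` prefix, so adjust them to your setup. The `LD_LIBRARY_PATH` step matches the shared-library note in the FAQ.

```shell
# Hypothetical compile of the file-based example; -I/-L paths assume a
# default "make install" into /usr/local (adjust otherwise).
gcc -O2 -o file_adapter_ex2 file_adapter_ex2.c \
    -I/usr/local/include -L/usr/local/lib -lcmph

# The runtime linker must also be able to find libcmph.so:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
./file_adapter_ex2
```
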
+
+%!include: ALGORITHMS.t2t
+
+%!include: FOOTER.t2t
+
+%!include(html): ''GOOGLEANALYTICS.t2t''
\ No newline at end of file
diff --git a/FAQ.t2t b/FAQ.t2t
new file mode 100644
index 0000000..7807bc6
--- /dev/null
+++ b/FAQ.t2t
@@ -0,0 +1,38 @@
+CMPH FAQ
+
+
+%!includeconf: CONFIG.t2t
+
+- How do I define the ids of the keys?
+ - You don't. The ids will be assigned by the algorithm creating the minimal
+ perfect hash function. If the algorithm creates an **ordered** minimal
+ perfect hash function, the ids will be the indices of the keys in the
+ input. Otherwise, you have no guarantee of the distribution of the ids.
+
+- Why do I always get the error "Unable to create minimum perfect hashing function"?
+ - The algorithms do not guarantee that a minimal perfect hash function can
+ be created. In practice, it will almost always work if your input
+ is big enough (>100 keys).
+ The error is most likely caused by duplicated
+ keys in the input. You must guarantee that the keys are unique in the
+ input. If you are using a UN*X based OS, try doing
+``` # sort input.txt | uniq > input_uniq.txt
+ and run cmph with input_uniq.txt.
+
+- Why is the default (jenkins) hash function still used after I change it with the cmph_config_set_hashfuncs function?
+ - You are probably calling the cmph_config_set_algo function after
+ cmph_config_set_hashfuncs. The hash functions are reset to the default
+ when you call the cmph_config_set_algo function, discarding your earlier choice.
+
+- What do I do when I get the following error?
+ - Error: **error while loading shared libraries: libcmph.so.0: cannot open shared object file: No such file or directory**
+
+ - Solution: type **export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/** at the shell or put that shell command
+ in your .profile file or in the /etc/profile file.
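The configuration-ordering pitfall above can be sketched in code (a sketch only: error handling is omitted, and the surrounding setup is assumed to look like the examples in EXAMPLES.t2t). Note the terminator convention: the array passed to cmph_config_set_hashfuncs ends with CMPH_HASH_COUNT.

```c
#include <cmph.h>

/* Sketch: choose the algorithm FIRST, then the hash functions.
 * cmph_config_set_algo() resets the configuration to the chosen
 * algorithm's defaults, which would discard an earlier
 * cmph_config_set_hashfuncs() call. */
void configure(cmph_io_adapter_t *source)
{
    CMPH_HASH hashfuncs[] = { CMPH_HASH_JENKINS, CMPH_HASH_COUNT };

    cmph_config_t *config = cmph_config_new(source);
    cmph_config_set_algo(config, CMPH_CHM);       /* first: the algorithm  */
    cmph_config_set_hashfuncs(config, hashfuncs); /* then: the hash choice */
    /* ... cmph_new(config), etc. ... */
    cmph_config_destroy(config);
}
```
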
+
+%!include: ALGORITHMS.t2t
+
+%!include: FOOTER.t2t
+
+%!include(html): ''GOOGLEANALYTICS.t2t''
\ No newline at end of file
diff --git a/FCH.t2t b/FCH.t2t
new file mode 100644
index 0000000..872e040
--- /dev/null
+++ b/FCH.t2t
@@ -0,0 +1,47 @@
+FCH Algorithm
+
+
+%!includeconf: CONFIG.t2t
+
+----------------------------------------
+
+==The Algorithm==
+The algorithm is presented in [[1 #papers]].
+----------------------------------------
+
+==Memory Consumption==
+
+Now we detail the memory consumption to generate and to store minimal perfect hash functions
+using the FCH algorithm. The structures responsible for the memory consumption are the following:
+- A vector containing all the //n// keys.
+- Data structures to speed up the searching step:
+ + **random_table**: a vector used to remember currently empty slots in the hash table. It stores //n// 4-byte integers. This vector initially contains a random permutation of the //n// hash addresses. A pointer called filled_count is used to keep the invariant that any slot to the right of filled_count (inclusive) is empty and any slot to the left is filled.
+ + **hash_table**: a table used to check whether all the collisions were resolved. It has //n// entries of one byte.
+ + **map_table**: for any unfilled slot //x// in hash_table, the map_table vector contains //n// 4-byte pointers into random_table such that random_table[map_table[x]] = x. Thus, given an empty slot x in the hash_table, we can locate its position in the random_table vector through map_table.
+
+- Other auxiliary structures:
+ + **sorted_indexes**: a vector of //cn/(log(n) + 1)// 4-byte pointers used to indirectly keep the buckets sorted by decreasing order of their sizes.
+
+ + **function //g//**: represented by a vector of //cn/(log(n) + 1)// 4-byte integers, one for each bucket. It is used to spread all the keys in a given bucket into the hash table without collisions.
+
+
+Thus, the total memory consumption of the FCH algorithm for generating a minimal
+perfect hash function (MPHF) is: //O(n) + 9n + 8cn/(log(n) + 1)// bytes.
+The value of the parameter //c// must be greater than or equal to 2.6.
+
+Now we present the memory consumption to store the resulting function.
+We only need to store the //g// function and a constant number of bytes for the seed of the hash functions used in the resulting MPHF. Thus, we need //cn/(log(n) + 1) + O(1)// bytes.
+
+----------------------------------------
+
+==Papers==[papers]
+
++ E.A. Fox, Q.F. Chen, and L.S. Heath. [A faster algorithm for constructing minimal perfect hash functions. papers/fch92.pdf] In Proc. 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 266-273, 1992.
+
+
+%!include: ALGORITHMS.t2t
+
+%!include: FOOTER.t2t
+
+%!include(html): ''GOOGLEANALYTICS.t2t''
\ No newline at end of file
diff --git a/FOOTER.t2t b/FOOTER.t2t
new file mode 100644
index 0000000..47698a0
--- /dev/null
+++ b/FOOTER.t2t
@@ -0,0 +1,13 @@
+
+
+
+Enjoy!
+
+[Davi de Castro Reis davi@users.sourceforge.net]
+
+[Djamel Belazzougui db8192@users.sourceforge.net]
+
+[Fabiano Cupertino Botelho fc_botelho@users.sourceforge.net]
+
+[Nivio Ziviani nivio@dcc.ufmg.br]
+
diff --git a/GOOGLEANALYTICS.t2t b/GOOGLEANALYTICS.t2t
new file mode 100644
index 0000000..360af4c
--- /dev/null
+++ b/GOOGLEANALYTICS.t2t
@@ -0,0 +1,9 @@
+
+
\ No newline at end of file
diff --git a/GPERF.t2t b/GPERF.t2t
new file mode 100644
index 0000000..b047af6
--- /dev/null
+++ b/GPERF.t2t
@@ -0,0 +1,39 @@
+GPERF versus CMPH
+
+
+%!includeconf: CONFIG.t2t
+
+You might ask why use cmph if [gperf http://www.gnu.org/software/gperf/gperf.html]
+already works perfectly. Actually, gperf and cmph have different goals.
+Basically, these are the requirements for each of them:
+
+
+- GPERF
+
+ - Create very fast hash functions for **small** sets
+
+ - Create **perfect** hash functions
+
+- CMPH
+
+ - Create very fast hash functions for **very large** sets
+
+ - Create **minimal perfect** hash functions
+
+As a result, cmph can be used to create hash functions where gperf would run
+forever without finding a perfect hash function, because of the running
+time of the algorithm and the large memory usage.
+On the other hand, functions created by cmph are about 2x slower than those
+created by gperf.
+
+So, if you have large sets, or memory usage is a key restriction for you, stick
+to cmph. If you have small sets, and do not care about memory usage, go with
+gperf. The first problem is common in the information retrieval field (e.g.
+assigning ids to millions of documents), while the latter is usually found in
+the compiler programming area (detecting reserved keywords).
+
+%!include: ALGORITHMS.t2t
+
+%!include: FOOTER.t2t
+
+%!include(html): ''GOOGLEANALYTICS.t2t''
\ No newline at end of file
diff --git a/LGPL-2 b/LGPL-2
new file mode 100644
index 0000000..74586da
--- /dev/null
+++ b/LGPL-2
@@ -0,0 +1,513 @@
+Most components of the "acl" package are licensed under
+Version 2.1 of the GNU Lesser General Public License (see below).
+
+Some components (as annotated in the source) are licensed
+under Version 2 of the GNU General Public License (see COPYING).
+
+----------------------------------------------------------------------
+
+                  GNU LESSER GENERAL PUBLIC LICENSE
+                       Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL.
It also counts + as the successor of the GNU Library Public License, version 2, hence + the version number 2.1.] + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +Licenses are intended to guarantee your freedom to share and change +free software--to make sure the software is free for all its users. + + This license, the Lesser General Public License, applies to some +specially designated software packages--typically libraries--of the +Free Software Foundation and other authors who decide to use it. You +can use it too, but we suggest you first think carefully about whether +this license or the ordinary General Public License is the better +strategy to use in any particular case, based on the explanations below. + + When we speak of free software, we are referring to freedom of use, +not price. Our General Public Licenses are designed to make sure that +you have the freedom to distribute copies of free software (and charge +for this service if you wish); that you receive source code or can get +it if you want it; that you can change the software and use pieces of +it in new free programs; and that you are informed that you can do +these things. + + To protect your rights, we need to make restrictions that forbid +distributors to deny you these rights or to ask you to surrender these +rights. These restrictions translate to certain responsibilities for +you if you distribute copies of the library or if you modify it. + + For example, if you distribute copies of the library, whether gratis +or for a fee, you must give the recipients all the rights that we gave +you. You must make sure that they, too, receive or can get the source +code. If you link other code with the library, you must provide +complete object files to the recipients, so that they can relink them +with the library after making changes to the library and recompiling +it. 
And you must show them these terms so they know their rights. + + We protect your rights with a two-step method: (1) we copyright the +library, and (2) we offer you this license, which gives you legal +permission to copy, distribute and/or modify the library. + + To protect each distributor, we want to make it very clear that +there is no warranty for the free library. Also, if the library is +modified by someone else and passed on, the recipients should know +that what they have is not the original version, so that the original +author's reputation will not be affected by problems that might be +introduced by others. + + Finally, software patents pose a constant threat to the existence of +any free program. We wish to make sure that a company cannot +effectively restrict the users of a free program by obtaining a +restrictive license from a patent holder. Therefore, we insist that +any patent license obtained for a version of the library must be +consistent with the full freedom of use specified in this license. + + Most GNU software, including some libraries, is covered by the +ordinary GNU General Public License. This license, the GNU Lesser +General Public License, applies to certain designated libraries, and +is quite different from the ordinary General Public License. We use +this license for certain libraries in order to permit linking those +libraries into non-free programs. + + When a program is linked with a library, whether statically or using +a shared library, the combination of the two is legally speaking a +combined work, a derivative of the original library. The ordinary +General Public License therefore permits such linking only if the +entire combination fits its criteria of freedom. The Lesser General +Public License permits more lax criteria for linking other code with +the library. + + We call this license the "Lesser" General Public License because it +does Less to protect the user's freedom than the ordinary General +Public License. 
It also provides other free software developers Less +of an advantage over competing non-free programs. These disadvantages +are the reason we use the ordinary General Public License for many +libraries. However, the Lesser license provides advantages in certain +special circumstances. + + For example, on rare occasions, there may be a special need to +encourage the widest possible use of a certain library, so that it becomes +a de-facto standard. To achieve this, non-free programs must be +allowed to use the library. A more frequent case is that a free +library does the same job as widely used non-free libraries. In this +case, there is little to gain by limiting the free library to free +software only, so we use the Lesser General Public License. + + In other cases, permission to use a particular library in non-free +programs enables a greater number of people to use a large body of +free software. For example, permission to use the GNU C Library in +non-free programs enables many more people to use the whole GNU +operating system, as well as its variant, the GNU/Linux operating +system. + + Although the Lesser General Public License is Less protective of the +users' freedom, it does ensure that the user of a program that is +linked with the Library has the freedom and the wherewithal to run +that program using a modified version of the Library. + + The precise terms and conditions for copying, distribution and +modification follow. Pay close attention to the difference between a +"work based on the library" and a "work that uses the library". The +former contains code derived from the library, whereas the latter must +be combined with the library in order to run. + + GNU LESSER GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. 
This License Agreement applies to any software library or other +program which contains a notice placed by the copyright holder or +other authorized party saying it may be distributed under the terms of +this Lesser General Public License (also called "this License"). +Each licensee is addressed as "you". + + A "library" means a collection of software functions and/or data +prepared so as to be conveniently linked with application programs +(which use some of those functions and data) to form executables. + + The "Library", below, refers to any such software library or work +which has been distributed under these terms. A "work based on the +Library" means either the Library or any derivative work under +copyright law: that is to say, a work containing the Library or a +portion of it, either verbatim or with modifications and/or translated +straightforwardly into another language. (Hereinafter, translation is +included without limitation in the term "modification".) + + "Source code" for a work means the preferred form of the work for +making modifications to it. For a library, complete source code means +all the source code for all modules it contains, plus any associated +interface definition files, plus the scripts used to control compilation +and installation of the library. + + Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running a program using the Library is not restricted, and output from +such a program is covered only if its contents constitute a work based +on the Library (independent of the use of the Library in a tool for +writing it). Whether that is true depends on what the Library does +and what the program that uses the Library does. + + 1. 
You may copy and distribute verbatim copies of the Library's +complete source code as you receive it, in any medium, provided that +you conspicuously and appropriately publish on each copy an +appropriate copyright notice and disclaimer of warranty; keep intact +all the notices that refer to this License and to the absence of any +warranty; and distribute a copy of this License along with the +Library. + + You may charge a fee for the physical act of transferring a copy, +and you may at your option offer warranty protection in exchange for a +fee. + + 2. You may modify your copy or copies of the Library or any portion +of it, thus forming a work based on the Library, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) The modified work must itself be a software library. + + b) You must cause the files modified to carry prominent notices + stating that you changed the files and the date of any change. + + c) You must cause the whole of the work to be licensed at no + charge to all third parties under the terms of this License. + + d) If a facility in the modified Library refers to a function or a + table of data to be supplied by an application program that uses + the facility, other than as an argument passed when the facility + is invoked, then you must make a good faith effort to ensure that, + in the event an application does not supply such function or + table, the facility still operates, and performs whatever part of + its purpose remains meaningful. + + (For example, a function in a library to compute square roots has + a purpose that is entirely well-defined independent of the + application. Therefore, Subsection 2d requires that any + application-supplied function or table used by this function must + be optional: if the application does not supply it, the square + root function must still compute square roots.) 
+ +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Library, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Library, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote +it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Library. + +In addition, mere aggregation of another work not based on the Library +with the Library (or with a work based on the Library) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. You may opt to apply the terms of the ordinary GNU General Public +License instead of this License to a given copy of the Library. To do +this, you must alter all the notices that refer to this License, so +that they refer to the ordinary GNU General Public License, version 2, +instead of to this License. (If a newer version than version 2 of the +ordinary GNU General Public License has appeared, then you can specify +that version instead if you wish.) Do not make any other change in +these notices. + + Once this change is made in a given copy, it is irreversible for +that copy, so the ordinary GNU General Public License applies to all +subsequent copies and derivative works made from that copy. + + This option is useful when you wish to copy part of the code of +the Library into a program that is not a library. + + 4. 
You may copy and distribute the Library (or a portion or +derivative of it, under Section 2) in object code or executable form +under the terms of Sections 1 and 2 above provided that you accompany +it with the complete corresponding machine-readable source code, which +must be distributed under the terms of Sections 1 and 2 above on a +medium customarily used for software interchange. + + If distribution of object code is made by offering access to copy +from a designated place, then offering equivalent access to copy the +source code from the same place satisfies the requirement to +distribute the source code, even though third parties are not +compelled to copy the source along with the object code. + + 5. A program that contains no derivative of any portion of the +Library, but is designed to work with the Library by being compiled or +linked with it, is called a "work that uses the Library". Such a +work, in isolation, is not a derivative work of the Library, and +therefore falls outside the scope of this License. + + However, linking a "work that uses the Library" with the Library +creates an executable that is a derivative of the Library (because it +contains portions of the Library), rather than a "work that uses the +library". The executable is therefore covered by this License. +Section 6 states terms for distribution of such executables. + + When a "work that uses the Library" uses material from a header file +that is part of the Library, the object code for the work may be a +derivative work of the Library even though the source code is not. +Whether this is true is especially significant if the work can be +linked without the Library, or if the work is itself a library. The +threshold for this to be true is not precisely defined by law. 
+ + If such an object file uses only numerical parameters, data +structure layouts and accessors, and small macros and small inline +functions (ten lines or less in length), then the use of the object +file is unrestricted, regardless of whether it is legally a derivative +work. (Executables containing this object code plus portions of the +Library will still fall under Section 6.) + + Otherwise, if the work is a derivative of the Library, you may +distribute the object code for the work under the terms of Section 6. +Any executables containing that work also fall under Section 6, +whether or not they are linked directly with the Library itself. + + 6. As an exception to the Sections above, you may also combine or +link a "work that uses the Library" with the Library to produce a +work containing portions of the Library, and distribute that work +under terms of your choice, provided that the terms permit +modification of the work for the customer's own use and reverse +engineering for debugging such modifications. + + You must give prominent notice with each copy of the work that the +Library is used in it and that the Library and its use are covered by +this License. You must supply a copy of this License. If the work +during execution displays copyright notices, you must include the +copyright notice for the Library among them, as well as a reference +directing the user to the copy of this License. Also, you must do one +of these things: + + a) Accompany the work with the complete corresponding + machine-readable source code for the Library including whatever + changes were used in the work (which must be distributed under + Sections 1 and 2 above); and, if the work is an executable linked + with the Library, with the complete machine-readable "work that + uses the Library", as object code and/or source code, so that the + user can modify the Library and then relink to produce a modified + executable containing the modified Library. 
(It is understood + that the user who changes the contents of definitions files in the + Library will not necessarily be able to recompile the application + to use the modified definitions.) + + b) Use a suitable shared library mechanism for linking with the + Library. A suitable mechanism is one that (1) uses at run time a + copy of the library already present on the user's computer system, + rather than copying library functions into the executable, and (2) + will operate properly with a modified version of the library, if + the user installs one, as long as the modified version is + interface-compatible with the version that the work was made with. + + c) Accompany the work with a written offer, valid for at + least three years, to give the same user the materials + specified in Subsection 6a, above, for a charge no more + than the cost of performing this distribution. + + d) If distribution of the work is made by offering access to copy + from a designated place, offer equivalent access to copy the above + specified materials from the same place. + + e) Verify that the user has already received a copy of these + materials or that you have already sent this user a copy. + + For an executable, the required form of the "work that uses the +Library" must include any data and utility programs needed for +reproducing the executable from it. However, as a special exception, +the materials to be distributed need not include anything that is +normally distributed (in either source or binary form) with the major +components (compiler, kernel, and so on) of the operating system on +which the executable runs, unless that component itself accompanies +the executable. + + It may happen that this requirement contradicts the license +restrictions of other proprietary libraries that do not normally +accompany the operating system. Such a contradiction means you cannot +use both them and the Library together in an executable that you +distribute. + + 7. 
You may place library facilities that are a work based on the +Library side-by-side in a single library together with other library +facilities not covered by this License, and distribute such a combined +library, provided that the separate distribution of the work based on +the Library and of the other library facilities is otherwise +permitted, and provided that you do these two things: + + a) Accompany the combined library with a copy of the same work + based on the Library, uncombined with any other library + facilities. This must be distributed under the terms of the + Sections above. + + b) Give prominent notice with the combined library of the fact + that part of it is a work based on the Library, and explaining + where to find the accompanying uncombined form of the same work. + + 8. You may not copy, modify, sublicense, link with, or distribute +the Library except as expressly provided under this License. Any +attempt otherwise to copy, modify, sublicense, link with, or +distribute the Library is void, and will automatically terminate your +rights under this License. However, parties who have received copies, +or rights, from you under this License will not have their licenses +terminated so long as such parties remain in full compliance. + + 9. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Library or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Library (or any work based on the +Library), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Library or works based on it. + + 10. 
Each time you redistribute the Library (or any work based on the +Library), the recipient automatically receives a license from the +original licensor to copy, distribute, link with or modify the Library +subject to these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties with +this License. + + 11. If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Library at all. For example, if a patent +license would not permit royalty-free redistribution of the Library by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Library. + +If any portion of this section is held invalid or unenforceable under any +particular circumstance, the balance of the section is intended to apply, +and the section as a whole is intended to apply in other circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system which is +implemented by public license practices. 
Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 12. If the distribution and/or use of the Library is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Library under this License may add +an explicit geographical distribution limitation excluding those countries, +so that distribution is permitted only in or among countries not thus +excluded. In such case, this License incorporates the limitation as if +written in the body of this License. + + 13. The Free Software Foundation may publish revised and/or new +versions of the Lesser General Public License from time to time. +Such new versions will be similar in spirit to the present version, +but may differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Library +specifies a version number of this License which applies to it and +"any later version", you have the option of following the terms and +conditions either of that version or of any later version published by +the Free Software Foundation. If the Library does not specify a +license version number, you may choose any version ever published by +the Free Software Foundation. + + 14. If you wish to incorporate parts of the Library into other free +programs whose distribution conditions are incompatible with these, +write to the author to ask for permission. For software which is +copyrighted by the Free Software Foundation, write to the Free +Software Foundation; we sometimes make exceptions for this. 
Our +decision will be guided by the two goals of preserving the free status +of all derivatives of our free software and of promoting the sharing +and reuse of software generally. + + NO WARRANTY + + 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO +WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. +EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR +OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY +KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE +LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME +THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN +WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY +AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU +FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR +CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE +LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING +RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A +FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF +SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH +DAMAGES. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Libraries + + If you develop a new library, and you want it to be of the greatest +possible use to the public, we recommend making it free software that +everyone can redistribute and change. You can do so by permitting +redistribution under these terms (or, alternatively, under the terms of the +ordinary General Public License). + + To apply these terms, attach the following notices to the library. 
It is
+safest to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least the
+"copyright" line and a pointer to where the full notice is found.
+
+
+    <one line to give the library's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; if not, write to the Free Software
+    Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the library, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the
+  library `Frob' (a library for tweaking knobs) written by James Random Hacker.
+
+  <signature of Ty Coon>, 1 April 1990
+  Ty Coon, President of Vice
+
+That's all there is to it!
+
+
diff --git a/LOGO.t2t b/LOGO.t2t
new file mode 100644
index 0000000..dc245a8
--- /dev/null
+++ b/LOGO.t2t
@@ -0,0 +1 @@
+SourceForge.net Logo
diff --git a/MPL-1.1 b/MPL-1.1
new file mode 100644
index 0000000..06f9651
--- /dev/null
+++ b/MPL-1.1
@@ -0,0 +1,469 @@
+                          MOZILLA PUBLIC LICENSE
+                                Version 1.1
+
+                              ---------------
+
+1. Definitions.
+
+     1.0.1. "Commercial Use" means distribution or otherwise making the
+     Covered Code available to a third party.
+
+     1.1.
"Contributor" means each entity that creates or contributes to + the creation of Modifications. + + 1.2. "Contributor Version" means the combination of the Original + Code, prior Modifications used by a Contributor, and the Modifications + made by that particular Contributor. + + 1.3. "Covered Code" means the Original Code or Modifications or the + combination of the Original Code and Modifications, in each case + including portions thereof. + + 1.4. "Electronic Distribution Mechanism" means a mechanism generally + accepted in the software development community for the electronic + transfer of data. + + 1.5. "Executable" means Covered Code in any form other than Source + Code. + + 1.6. "Initial Developer" means the individual or entity identified + as the Initial Developer in the Source Code notice required by Exhibit + A. + + 1.7. "Larger Work" means a work which combines Covered Code or + portions thereof with code not governed by the terms of this License. + + 1.8. "License" means this document. + + 1.8.1. "Licensable" means having the right to grant, to the maximum + extent possible, whether at the time of the initial grant or + subsequently acquired, any and all of the rights conveyed herein. + + 1.9. "Modifications" means any addition to or deletion from the + substance or structure of either the Original Code or any previous + Modifications. When Covered Code is released as a series of files, a + Modification is: + A. Any addition to or deletion from the contents of a file + containing Original Code or previous Modifications. + + B. Any new file that contains any part of the Original Code or + previous Modifications. + + 1.10. "Original Code" means Source Code of computer software code + which is described in the Source Code notice required by Exhibit A as + Original Code, and which, at the time of its release under this + License is not already Covered Code governed by this License. + + 1.10.1. 
"Patent Claims" means any patent claim(s), now owned or + hereafter acquired, including without limitation, method, process, + and apparatus claims, in any patent Licensable by grantor. + + 1.11. "Source Code" means the preferred form of the Covered Code for + making modifications to it, including all modules it contains, plus + any associated interface definition files, scripts used to control + compilation and installation of an Executable, or source code + differential comparisons against either the Original Code or another + well known, available Covered Code of the Contributor's choice. The + Source Code can be in a compressed or archival form, provided the + appropriate decompression or de-archiving software is widely available + for no charge. + + 1.12. "You" (or "Your") means an individual or a legal entity + exercising rights under, and complying with all of the terms of, this + License or a future version of this License issued under Section 6.1. + For legal entities, "You" includes any entity which controls, is + controlled by, or is under common control with You. For purposes of + this definition, "control" means (a) the power, direct or indirect, + to cause the direction or management of such entity, whether by + contract or otherwise, or (b) ownership of more than fifty percent + (50%) of the outstanding shares or beneficial ownership of such + entity. + +2. Source Code License. + + 2.1. The Initial Developer Grant. 
+ The Initial Developer hereby grants You a world-wide, royalty-free, + non-exclusive license, subject to third party intellectual property + claims: + (a) under intellectual property rights (other than patent or + trademark) Licensable by Initial Developer to use, reproduce, + modify, display, perform, sublicense and distribute the Original + Code (or portions thereof) with or without Modifications, and/or + as part of a Larger Work; and + + (b) under Patents Claims infringed by the making, using or + selling of Original Code, to make, have made, use, practice, + sell, and offer for sale, and/or otherwise dispose of the + Original Code (or portions thereof). + + (c) the licenses granted in this Section 2.1(a) and (b) are + effective on the date Initial Developer first distributes + Original Code under the terms of this License. + + (d) Notwithstanding Section 2.1(b) above, no patent license is + granted: 1) for code that You delete from the Original Code; 2) + separate from the Original Code; or 3) for infringements caused + by: i) the modification of the Original Code or ii) the + combination of the Original Code with other software or devices. + + 2.2. Contributor Grant. 
+ Subject to third party intellectual property claims, each Contributor + hereby grants You a world-wide, royalty-free, non-exclusive license + + (a) under intellectual property rights (other than patent or + trademark) Licensable by Contributor, to use, reproduce, modify, + display, perform, sublicense and distribute the Modifications + created by such Contributor (or portions thereof) either on an + unmodified basis, with other Modifications, as Covered Code + and/or as part of a Larger Work; and + + (b) under Patent Claims infringed by the making, using, or + selling of Modifications made by that Contributor either alone + and/or in combination with its Contributor Version (or portions + of such combination), to make, use, sell, offer for sale, have + made, and/or otherwise dispose of: 1) Modifications made by that + Contributor (or portions thereof); and 2) the combination of + Modifications made by that Contributor with its Contributor + Version (or portions of such combination). + + (c) the licenses granted in Sections 2.2(a) and 2.2(b) are + effective on the date Contributor first makes Commercial Use of + the Covered Code. + + (d) Notwithstanding Section 2.2(b) above, no patent license is + granted: 1) for any code that Contributor has deleted from the + Contributor Version; 2) separate from the Contributor Version; + 3) for infringements caused by: i) third party modifications of + Contributor Version or ii) the combination of Modifications made + by that Contributor with other software (except as part of the + Contributor Version) or other devices; or 4) under Patent Claims + infringed by Covered Code in the absence of Modifications made by + that Contributor. + +3. Distribution Obligations. + + 3.1. Application of License. + The Modifications which You create or to which You contribute are + governed by the terms of this License, including without limitation + Section 2.2. 
The Source Code version of Covered Code may be + distributed only under the terms of this License or a future version + of this License released under Section 6.1, and You must include a + copy of this License with every copy of the Source Code You + distribute. You may not offer or impose any terms on any Source Code + version that alters or restricts the applicable version of this + License or the recipients' rights hereunder. However, You may include + an additional document offering the additional rights described in + Section 3.5. + + 3.2. Availability of Source Code. + Any Modification which You create or to which You contribute must be + made available in Source Code form under the terms of this License + either on the same media as an Executable version or via an accepted + Electronic Distribution Mechanism to anyone to whom you made an + Executable version available; and if made available via Electronic + Distribution Mechanism, must remain available for at least twelve (12) + months after the date it initially became available, or at least six + (6) months after a subsequent version of that particular Modification + has been made available to such recipients. You are responsible for + ensuring that the Source Code version remains available even if the + Electronic Distribution Mechanism is maintained by a third party. + + 3.3. Description of Modifications. + You must cause all Covered Code to which You contribute to contain a + file documenting the changes You made to create that Covered Code and + the date of any change. You must include a prominent statement that + the Modification is derived, directly or indirectly, from Original + Code provided by the Initial Developer and including the name of the + Initial Developer in (a) the Source Code, and (b) in any notice in an + Executable version or related documentation in which You describe the + origin or ownership of the Covered Code. + + 3.4. Intellectual Property Matters + (a) Third Party Claims. 
+ If Contributor has knowledge that a license under a third party's + intellectual property rights is required to exercise the rights + granted by such Contributor under Sections 2.1 or 2.2, + Contributor must include a text file with the Source Code + distribution titled "LEGAL" which describes the claim and the + party making the claim in sufficient detail that a recipient will + know whom to contact. If Contributor obtains such knowledge after + the Modification is made available as described in Section 3.2, + Contributor shall promptly modify the LEGAL file in all copies + Contributor makes available thereafter and shall take other steps + (such as notifying appropriate mailing lists or newsgroups) + reasonably calculated to inform those who received the Covered + Code that new knowledge has been obtained. + + (b) Contributor APIs. + If Contributor's Modifications include an application programming + interface and Contributor has knowledge of patent licenses which + are reasonably necessary to implement that API, Contributor must + also include this information in the LEGAL file. + + (c) Representations. + Contributor represents that, except as disclosed pursuant to + Section 3.4(a) above, Contributor believes that Contributor's + Modifications are Contributor's original creation(s) and/or + Contributor has sufficient rights to grant the rights conveyed by + this License. + + 3.5. Required Notices. + You must duplicate the notice in Exhibit A in each file of the Source + Code. If it is not possible to put such notice in a particular Source + Code file due to its structure, then You must include such notice in a + location (such as a relevant directory) where a user would be likely + to look for such a notice. If You created one or more Modification(s) + You may add your name as a Contributor to the notice described in + Exhibit A. 
You must also duplicate this License in any documentation + for the Source Code where You describe recipients' rights or ownership + rights relating to Covered Code. You may choose to offer, and to + charge a fee for, warranty, support, indemnity or liability + obligations to one or more recipients of Covered Code. However, You + may do so only on Your own behalf, and not on behalf of the Initial + Developer or any Contributor. You must make it absolutely clear than + any such warranty, support, indemnity or liability obligation is + offered by You alone, and You hereby agree to indemnify the Initial + Developer and every Contributor for any liability incurred by the + Initial Developer or such Contributor as a result of warranty, + support, indemnity or liability terms You offer. + + 3.6. Distribution of Executable Versions. + You may distribute Covered Code in Executable form only if the + requirements of Section 3.1-3.5 have been met for that Covered Code, + and if You include a notice stating that the Source Code version of + the Covered Code is available under the terms of this License, + including a description of how and where You have fulfilled the + obligations of Section 3.2. The notice must be conspicuously included + in any notice in an Executable version, related documentation or + collateral in which You describe recipients' rights relating to the + Covered Code. You may distribute the Executable version of Covered + Code or ownership rights under a license of Your choice, which may + contain terms different from this License, provided that You are in + compliance with the terms of this License and that the license for the + Executable version does not attempt to limit or alter the recipient's + rights in the Source Code version from the rights set forth in this + License. 
If You distribute the Executable version under a different + license You must make it absolutely clear that any terms which differ + from this License are offered by You alone, not by the Initial + Developer or any Contributor. You hereby agree to indemnify the + Initial Developer and every Contributor for any liability incurred by + the Initial Developer or such Contributor as a result of any such + terms You offer. + + 3.7. Larger Works. + You may create a Larger Work by combining Covered Code with other code + not governed by the terms of this License and distribute the Larger + Work as a single product. In such a case, You must make sure the + requirements of this License are fulfilled for the Covered Code. + +4. Inability to Comply Due to Statute or Regulation. + + If it is impossible for You to comply with any of the terms of this + License with respect to some or all of the Covered Code due to + statute, judicial order, or regulation then You must: (a) comply with + the terms of this License to the maximum extent possible; and (b) + describe the limitations and the code they affect. Such description + must be included in the LEGAL file described in Section 3.4 and must + be included with all distributions of the Source Code. Except to the + extent prohibited by statute or regulation, such description must be + sufficiently detailed for a recipient of ordinary skill to be able to + understand it. + +5. Application of this License. + + This License applies to code to which the Initial Developer has + attached the notice in Exhibit A and to related Covered Code. + +6. Versions of the License. + + 6.1. New Versions. + Netscape Communications Corporation ("Netscape") may publish revised + and/or new versions of the License from time to time. Each version + will be given a distinguishing version number. + + 6.2. Effect of New Versions. 
+ Once Covered Code has been published under a particular version of the + License, You may always continue to use it under the terms of that + version. You may also choose to use such Covered Code under the terms + of any subsequent version of the License published by Netscape. No one + other than Netscape has the right to modify the terms applicable to + Covered Code created under this License. + + 6.3. Derivative Works. + If You create or use a modified version of this License (which you may + only do in order to apply it to code which is not already Covered Code + governed by this License), You must (a) rename Your license so that + the phrases "Mozilla", "MOZILLAPL", "MOZPL", "Netscape", + "MPL", "NPL" or any confusingly similar phrase do not appear in your + license (except to note that your license differs from this License) + and (b) otherwise make it clear that Your version of the license + contains terms which differ from the Mozilla Public License and + Netscape Public License. (Filling in the name of the Initial + Developer, Original Code or Contributor in the notice described in + Exhibit A shall not of themselves be deemed to be modifications of + this License.) + +7. DISCLAIMER OF WARRANTY. + + COVERED CODE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS, + WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, + WITHOUT LIMITATION, WARRANTIES THAT THE COVERED CODE IS FREE OF + DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. + THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED CODE + IS WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT, + YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE + COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER + OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF + ANY COVERED CODE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER. + +8. TERMINATION. + + 8.1. 
This License and the rights granted hereunder will terminate + automatically if You fail to comply with terms herein and fail to cure + such breach within 30 days of becoming aware of the breach. All + sublicenses to the Covered Code which are properly granted shall + survive any termination of this License. Provisions which, by their + nature, must remain in effect beyond the termination of this License + shall survive. + + 8.2. If You initiate litigation by asserting a patent infringement + claim (excluding declatory judgment actions) against Initial Developer + or a Contributor (the Initial Developer or Contributor against whom + You file such action is referred to as "Participant") alleging that: + + (a) such Participant's Contributor Version directly or indirectly + infringes any patent, then any and all rights granted by such + Participant to You under Sections 2.1 and/or 2.2 of this License + shall, upon 60 days notice from Participant terminate prospectively, + unless if within 60 days after receipt of notice You either: (i) + agree in writing to pay Participant a mutually agreeable reasonable + royalty for Your past and future use of Modifications made by such + Participant, or (ii) withdraw Your litigation claim with respect to + the Contributor Version against such Participant. If within 60 days + of notice, a reasonable royalty and payment arrangement are not + mutually agreed upon in writing by the parties or the litigation claim + is not withdrawn, the rights granted by Participant to You under + Sections 2.1 and/or 2.2 automatically terminate at the expiration of + the 60 day notice period specified above. 
+ + (b) any software, hardware, or device, other than such Participant's + Contributor Version, directly or indirectly infringes any patent, then + any rights granted to You by such Participant under Sections 2.1(b) + and 2.2(b) are revoked effective as of the date You first made, used, + sold, distributed, or had made, Modifications made by that + Participant. + + 8.3. If You assert a patent infringement claim against Participant + alleging that such Participant's Contributor Version directly or + indirectly infringes any patent where such claim is resolved (such as + by license or settlement) prior to the initiation of patent + infringement litigation, then the reasonable value of the licenses + granted by such Participant under Sections 2.1 or 2.2 shall be taken + into account in determining the amount or value of any payment or + license. + + 8.4. In the event of termination under Sections 8.1 or 8.2 above, + all end user license agreements (excluding distributors and resellers) + which have been validly granted by You or any distributor hereunder + prior to termination shall survive termination. + +9. LIMITATION OF LIABILITY. + + UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT + (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL + DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED CODE, + OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR + ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY + CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF GOODWILL, + WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER + COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN + INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF + LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY + RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW + PROHIBITS SUCH LIMITATION. 
SOME JURISDICTIONS DO NOT ALLOW THE + EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO + THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU. + +10. U.S. GOVERNMENT END USERS. + + The Covered Code is a "commercial item," as that term is defined in + 48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer + software" and "commercial computer software documentation," as such + terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48 + C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995), + all U.S. Government End Users acquire Covered Code with only those + rights set forth herein. + +11. MISCELLANEOUS. + + This License represents the complete agreement concerning subject + matter hereof. If any provision of this License is held to be + unenforceable, such provision shall be reformed only to the extent + necessary to make it enforceable. This License shall be governed by + California law provisions (except to the extent applicable law, if + any, provides otherwise), excluding its conflict-of-law provisions. + With respect to disputes in which at least one party is a citizen of, + or an entity chartered or registered to do business in the United + States of America, any litigation relating to this License shall be + subject to the jurisdiction of the Federal Courts of the Northern + District of California, with venue lying in Santa Clara County, + California, with the losing party responsible for costs, including + without limitation, court costs and reasonable attorneys' fees and + expenses. The application of the United Nations Convention on + Contracts for the International Sale of Goods is expressly excluded. + Any law or regulation which provides that the language of a contract + shall be construed against the drafter shall not apply to this + License. + +12. RESPONSIBILITY FOR CLAIMS. 
+ + As between Initial Developer and the Contributors, each party is + responsible for claims and damages arising, directly or indirectly, + out of its utilization of rights under this License and You agree to + work with Initial Developer and Contributors to distribute such + responsibility on an equitable basis. Nothing herein is intended or + shall be deemed to constitute any admission of liability. + +13. MULTIPLE-LICENSED CODE. + + Initial Developer may designate portions of the Covered Code as + "Multiple-Licensed". "Multiple-Licensed" means that the Initial + Developer permits you to utilize portions of the Covered Code under + Your choice of the NPL or the alternative licenses, if any, specified + by the Initial Developer in the file described in Exhibit A. + +EXHIBIT A -Mozilla Public License. + + ``The contents of this file are subject to the Mozilla Public License + Version 1.1 (the "License"); you may not use this file except in + compliance with the License. You may obtain a copy of the License at + http://www.mozilla.org/MPL/ + + Software distributed under the License is distributed on an "AS IS" + basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the + License for the specific language governing rights and limitations + under the License. + + The Original Code is ______________________________________. + + The Initial Developer of the Original Code is ________________________. + Portions created by ______________________ are Copyright (C) ______ + _______________________. All Rights Reserved. + + Contributor(s): ______________________________________. + + Alternatively, the contents of this file may be used under the terms + of the _____ license (the "[___] License"), in which case the + provisions of [______] License are applicable instead of those + above. 
If you wish to allow use of your version of this file only + under the terms of the [____] License and not to allow others to use + your version of this file under the MPL, indicate your decision by + deleting the provisions above and replace them with the notice and + other provisions required by the [___] License. If you do not delete + the provisions above, a recipient may use your version of this file + under either the MPL or the [___] License." + + [NOTE: The text of this Exhibit A may differ slightly from the text of + the notices in the Source Code files of the Original Code. You should + use the text of this Exhibit A rather than the text found in the + Original Code Source Code for Your Modifications.] diff --git a/Makefile.am b/Makefile.am new file mode 100644 index 0000000..cdf1447 --- /dev/null +++ b/Makefile.am @@ -0,0 +1,5 @@ +SUBDIRS = src tests examples man $(CXXMPH) +EXTRA_DIST = cmph.spec configure.ac cmph.pc.in LGPL-2 MPL-1.1 + +pkgconfigdir = $(libdir)/pkgconfig +pkgconfig_DATA = cmph.pc diff --git a/NEWS b/NEWS new file mode 100644 index 0000000..e69de29 diff --git a/NEWSLOG.t2t b/NEWSLOG.t2t new file mode 100644 index 0000000..b74bf2a --- /dev/null +++ b/NEWSLOG.t2t @@ -0,0 +1,85 @@ +News Log + + +%!includeconf: CONFIG.t2t + +---------------------------------------- + +==News for version 1.1== + +Fixed a bug in the chd_pc algorithm and reorganized tests. + +==News for version 1.0== + +This is a bugfix only version, after which a revamp of the cmph code and +algorithms will be done. + +---------------------------------------- + +==News for version 0.9== + +- [The CHD algorithm chd.html], which is an algorithm that can be tuned to generate MPHFs that require approximately 2.07 bits per key to be stored. The algorithm outperforms [the BDZ algorithm bdz.html] and therefore is the fastest one available in the literature for sets that can be treated in internal memory. 
+- [The CHD_PH algorithm chd.html], which is an algorithm to generate PHFs with load factor up to //99 %//. It is actually the CHD algorithm without the ranking step. If we set the load factor to //81 %//, which is the maximum that can be obtained with [the BDZ algorithm bdz.html], the resulting functions can be stored in //1.40// bits per key. The space requirement increases with the load factor. +- All reported bugs and suggestions have been corrected and included as well. + + +---------------------------------------- + +==News for version 0.8== + +- [An algorithm to generate MPHFs that require around 2.6 bits per key to be stored bdz.html], which is referred to as BDZ algorithm. The algorithm is the fastest one available in the literature for sets that can be treated in internal memory. +- [An algorithm to generate PHFs with range m = cn, for c > 1.22 bdz.html], which is referred to as BDZ_PH algorithm. It is actually the BDZ algorithm without the ranking step. The resulting functions can be stored in 1.95 bits per key for //c = 1.23// and are considerably faster than the MPHFs generated by the BDZ algorithm. +- An adapter to support a vector of struct as the source of keys has been added. +- An API to support the ability of packing a perfect hash function into a preallocated contiguous memory space. The computation of a packed function is still faster and can be easily mmapped. +- The hash functions djb2, fnv and sdbm were removed because they do not use random seeds and therefore are not useful for MPHFs algorithms. +- All reported bugs and suggestions have been corrected and included as well. + + +---------------------------------------- + +==News for version 0.7== + +- Added man pages and a pkgconfig file. + + +---------------------------------------- + +==News for version 0.6== + +- [An algorithm to generate MPHFs that require less than 4 bits per key to be stored fch.html], which is referred to as FCH algorithm. 
The algorithm is only efficient for small sets.
+- The FCH algorithm is integrated with the [BRZ algorithm brz.html] so that you will be able to efficiently generate space-efficient MPHFs for sets on the order of billions of keys.
+- All reported bugs and suggestions have been corrected and included as well.
+
+
+----------------------------------------
+
+==News for version 0.5==
+
+- A thread safe vector adapter has been added.
+- [A new algorithm for sets in the order of billions of keys that requires approximately 8.1 bits per key to store the resulting MPHFs. brz.html]
+- All reported bugs and suggestions have been corrected and included as well.
+
+
+----------------------------------------
+
+==News for version 0.4==
+
+- Vector Adapter has been added.
+- An optimized version of bmz (bmz8) for small sets of keys (at most 256 keys) has been added.
+- All reported bugs and suggestions have been corrected and included as well.
+
+
+----------------------------------------
+
+==News for version 0.3==
+
+- A new heuristic added to the bmz algorithm makes it possible to generate an MPHF using only
+  //24.80n + O(1)// bytes during construction. The resulting function can be stored in //3.72n// bytes.
+%html% [click here bmz.html#heuristic] for details.
+
+
+%!include: ALGORITHMS.t2t
+
+%!include: FOOTER.t2t
+
+%!include(html): ''GOOGLEANALYTICS.t2t''
diff --git a/README.t2t b/README.t2t
new file mode 100644
index 0000000..21d851f
--- /dev/null
+++ b/README.t2t
@@ -0,0 +1,306 @@
+CMPH - C Minimal Perfect Hashing Library
+
+
+%!includeconf: CONFIG.t2t
+
+-------------------------------------------------------------------
+
+==Motivation==
+
+A perfect hash function maps a static set of n keys into a set of m integer numbers without collisions, where m is greater than or equal to n. If m is equal to n, the function is called minimal.
+
+[Minimal perfect hash functions concepts.html] are widely used for memory-efficient storage and fast retrieval of items from static sets, such as words in natural languages, reserved words in programming languages or interactive systems, universal resource locations (URLs) in Web search engines, or item sets in data mining techniques. Therefore, there are applications for minimal perfect hash functions in information retrieval systems, database systems, language translation systems, electronic commerce systems, compilers, operating systems, among others.
+
+The use of minimal perfect hash functions has, until now, been restricted to scenarios where the set of keys being hashed is small, because of the limitations of current algorithms. But in many cases it is crucial to deal with huge sets of keys. So, this project gives the free software community an API that works with sets on the order of billions of keys.
+
+Probably the most interesting application for minimal perfect hash functions is their use as an indexing structure for databases. The most popular data structure used as an indexing structure in databases is the B+ tree. In fact, the B+ tree is widely used for dynamic applications with frequent insertions and deletions of records. However, for applications with sporadic modifications and a huge number of queries the B+ tree is not the best option, because practical deployments of this structure are extremely complex, and perform poorly with very large sets of keys such as those required by the new frontiers of [database applications http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=299].
+
+For example, in the information retrieval field, working with huge collections is a daily task. The simple assignment of ids to the web pages of a collection can be a challenging task.
While traditional databases simply cannot handle more traffic once the working set of web page URLs no longer fits in main memory, minimal perfect hash functions can easily scale to hundreds of millions of entries, using stock hardware.
+
+As there are many applications for minimal perfect hash functions, it is important to implement memory- and time-efficient algorithms for constructing such functions. The lack of similar libraries in the free software world has been the main motivation to create the C Minimal Perfect Hashing Library ([gperf is a bit different gperf.html], since it was conceived to create very fast perfect hash functions for small sets of keys, whereas the CMPH Library was conceived to create minimal perfect hash functions for very large sets of keys). The C Minimal Perfect Hashing Library is a portable LGPLed library to generate and work with very efficient minimal perfect hash functions.
+
+-------------------------------------------------------------------
+
+==Description==
+
+The CMPH Library encapsulates the newest and most efficient algorithms in an easy-to-use, production-quality, fast API. The library was designed to work with big entries that cannot fit in main memory. It has been used successfully for constructing minimal perfect hash functions for sets with more than 100 million keys, and we intend to expand this number to the order of billions of keys. Although there is a lack of similar libraries, we can point out some of the distinguishing features of the CMPH Library:
+
+- Fast.
+- Space-efficient, with main memory usage carefully documented.
+- The best modern algorithms are available (or at least scheduled for implementation :-)).
+- Works with on-disk key sets through the adapter pattern.
+- Serialization of hash functions.
+- Portable C code (currently works on GNU/Linux and WIN32 and is reported to work on OpenBSD and Solaris).
+- Object oriented implementation.
+- Easily extensible.
+- Well encapsulated API aiming at binary compatibility across releases.
+- Free Software.
+
+
+----------------------------------------
+
+==Supported Algorithms==
+
+
+%html% - [CHD Algorithm chd.html]:
+%txt% - CHD Algorithm:
+ - It is the fastest algorithm to build PHFs and MPHFs in linear time.
+ - It generates the most compact PHFs and MPHFs we know of.
+ - It can generate PHFs with a load factor up to //99 %//.
+ - It can be used to generate //t//-perfect hash functions. A //t//-perfect hash function allows at most //t// collisions in a given bin. It is a well-known fact that modern memories are organized as blocks, which constitute the transfer unit. Examples of such blocks are cache lines for internal memory or sectors for hard disks. Thus, it can be very useful for devices that carry out I/O operations in blocks.
+ - It is a two-level scheme. It uses a first-level hash function to split the key set into buckets of average size determined by a parameter //b// in the range //[1,32]//. In the second level it uses displacement values to resolve the collisions that have given rise to the buckets.
+ - It can generate MPHFs that can be stored in approximately //2.07// bits per key.
+ - For a load factor equal to the maximum one that is achieved by the BDZ algorithm (//81 %//), the resulting PHFs are stored in approximately //1.40// bits per key.
+%html% - [BDZ Algorithm bdz.html]:
+%txt% - BDZ Algorithm:
+ - It is very simple and efficient. It outperforms all the ones below.
+ - It constructs both PHFs and MPHFs in linear time.
+ - The maximum load factor one can achieve for a PHF is //1/1.23//.
+ - It is based on acyclic random 3-graphs. A 3-graph is a generalization of a graph where each edge connects 3 vertices instead of only 2.
+ - The resulting MPHFs are not order preserving.
+ - The resulting MPHFs can be stored in only //(2 + x)cn// bits, where //c// should be larger than or equal to //1.23// and //x// is a constant larger than //0// (actually, x = 1/b and b is a parameter that should be larger than 2). For //c = 1.23// and //b = 8//, the resulting functions are stored in approximately 2.6 bits per key.
+ - For its maximum load factor (//81 %//), the resulting PHFs are stored in approximately //1.95// bits per key.
+%html% - [BMZ Algorithm bmz.html]:
+%txt% - BMZ Algorithm:
+ - It constructs MPHFs in linear time.
+ - It is based on cyclic random graphs. This makes it faster than the CHM algorithm.
+ - The resulting MPHFs are not order preserving.
+ - The resulting MPHFs are more compact than the ones generated by the CHM algorithm and can be stored in //4cn// bytes, where //c// is in the range //[0.93,1.15]//.
+%html% - [BRZ Algorithm brz.html]:
+%txt% - BRZ Algorithm:
+ - A very fast external-memory-based algorithm for constructing minimal perfect hash functions for sets on the order of billions of keys.
+ - It works in linear time.
+ - The resulting MPHFs are not order preserving.
+ - The resulting MPHFs can be stored using less than //8.0// bits per key.
+%html% - [CHM Algorithm chm.html]:
+%txt% - CHM Algorithm:
+ - It constructs MPHFs in linear time.
+ - It is based on acyclic random graphs.
+ - The resulting MPHFs are order preserving.
+ - The resulting MPHFs are stored in //4cn// bytes, where //c// is greater than 2.
+%html% - [FCH Algorithm fch.html]:
+%txt% - FCH Algorithm:
+ - It constructs minimal perfect hash functions that require less than 4 bits per key to be stored.
+ - The resulting MPHFs are very compact and very efficient at evaluation time.
+ - The algorithm is only efficient for small sets.
+ - It is used as the internal algorithm in the BRZ algorithm to efficiently solve larger problems while still generating MPHFs that require approximately 4.1 bits per key to be stored.
For that, you just need to set the parameters -a to brz and -c to a value larger than or equal to 2.6. + + +---------------------------------------- + +==News for version 1.1== + +Fixed a bug in the chd_pc algorithm and reorganized tests. + +==News for version 1.0== + +This is a bugfix only version, after which a revamp of the cmph code and +algorithms will be done. + +==News for version 0.9== + +- [The CHD algorithm chd.html], which is an algorithm that can be tuned to generate MPHFs that require approximately 2.07 bits per key to be stored. The algorithm outperforms [the BDZ algorithm bdz.html] and therefore is the fastest one available in the literature for sets that can be treated in internal memory. +- [The CHD_PH algorithm chd.html], which is an algorithm to generate PHFs with load factor up to //99 %//. It is actually the CHD algorithm without the ranking step. If we set the load factor to //81 %//, which is the maximum that can be obtained with [the BDZ algorithm bdz.html], the resulting functions can be stored in //1.40// bits per key. The space requirement increases with the load factor. +- All reported bugs and suggestions have been corrected and included as well. + + + +==News for version 0.8 == + +- [An algorithm to generate MPHFs that require around 2.6 bits per key to be stored bdz.html], which is referred to as BDZ algorithm. The algorithm is the fastest one available in the literature for sets that can be treated in internal memory. +- [An algorithm to generate PHFs with range m = cn, for c > 1.22 bdz.html], which is referred to as BDZ_PH algorithm. It is actually the BDZ algorithm without the ranking step. The resulting functions can be stored in 1.95 bits per key for //c = 1.23// and are considerably faster than the MPHFs generated by the BDZ algorithm. +- An adapter to support a vector of struct as the source of keys has been added. +- An API to support the ability of packing a perfect hash function into a preallocated contiguous memory space. 
The computation of a packed function is still faster and can be easily mmapped.
+- The hash functions djb2, fnv and sdbm were removed because they do not use random seeds and therefore are not useful for MPHF algorithms.
+- All reported bugs and suggestions have been corrected and included as well.
+
+
+
+[News log newslog.html]
+----------------------------------------
+
+==Examples==
+
+Using cmph is quite simple. Take a look.
+
+
+```
+#include <cmph.h>
+#include <string.h>
+#include <stdio.h>
+// Create minimal perfect hash function from in-memory vector
+int main(int argc, char **argv)
+{
+
+    // Creating a filled vector
+    unsigned int i = 0;
+    const char *vector[] = {"aaaaaaaaaa", "bbbbbbbbbb", "cccccccccc", "dddddddddd", "eeeeeeeeee",
+        "ffffffffff", "gggggggggg", "hhhhhhhhhh", "iiiiiiiiii", "jjjjjjjjjj"};
+    unsigned int nkeys = 10;
+    FILE* mphf_fd = fopen("temp.mph", "w");
+    // Source of keys
+    cmph_io_adapter_t *source = cmph_io_vector_adapter((char **)vector, nkeys);
+
+    //Create minimal perfect hash function using the brz algorithm.
+    cmph_config_t *config = cmph_config_new(source);
+    cmph_config_set_algo(config, CMPH_BRZ);
+    cmph_config_set_mphf_fd(config, mphf_fd);
+    cmph_t *hash = cmph_new(config);
+    cmph_config_destroy(config);
+    cmph_dump(hash, mphf_fd);
+    cmph_destroy(hash);
+    fclose(mphf_fd);
+
+    //Find key
+    mphf_fd = fopen("temp.mph", "r");
+    hash = cmph_load(mphf_fd);
+    while (i < nkeys) {
+        const char *key = vector[i];
+        unsigned int id = cmph_search(hash, key, (cmph_uint32)strlen(key));
+        fprintf(stderr, "key:%s -- hash:%u\n", key, id);
+        i++;
+    }
+
+    //Destroy hash
+    cmph_destroy(hash);
+    cmph_io_vector_adapter_destroy(source);
+    fclose(mphf_fd);
+    return 0;
+}
+```
+Download [vector_adapter_ex1.c examples/vector_adapter_ex1.c]. This example does not work in versions below 0.6. You need to update the sources from GIT to make it work.
+-------------------------------
+
+```
+#include <cmph.h>
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+// Create minimal perfect hash function from in-disk keys using BDZ algorithm
+int main(int argc, char **argv)
+{
+    //Open file with newline separated list of keys
+    FILE * keys_fd = fopen("keys.txt", "r");
+    cmph_t *hash = NULL;
+    if (keys_fd == NULL)
+    {
+        fprintf(stderr, "File \"keys.txt\" not found\n");
+        exit(1);
+    }
+    // Source of keys
+    cmph_io_adapter_t *source = cmph_io_nlfile_adapter(keys_fd);
+
+    cmph_config_t *config = cmph_config_new(source);
+    cmph_config_set_algo(config, CMPH_BDZ);
+    hash = cmph_new(config);
+    cmph_config_destroy(config);
+
+    //Find key
+    const char *key = "jjjjjjjjjj";
+    unsigned int id = cmph_search(hash, key, (cmph_uint32)strlen(key));
+    fprintf(stderr, "Id:%u\n", id);
+    //Destroy hash
+    cmph_destroy(hash);
+    cmph_io_nlfile_adapter_destroy(source);
+    fclose(keys_fd);
+    return 0;
+}
+```
+Download [file_adapter_ex2.c examples/file_adapter_ex2.c] and [keys.txt examples/keys.txt]. This example does not work in versions below 0.8. You need to update the sources from GIT to make it work.
+
+[Click here to see more examples examples.html]
+--------------------------------------
+
+
+
+==The cmph application==
+
+cmph is the name of both the library and the utility
+application that comes with this package. You can use the cmph
+application for constructing minimal perfect hash functions from the command line.
+The cmph utility
+comes with a number of flags, but it is very simple to create and to query
+minimal perfect hash functions:
+
+```
+ $ # Using the chm algorithm (default one) for constructing an MPHF for keys in file keys_file
+ $ ./cmph -g keys_file
+ $ # Query id of keys in the file keys_query
+ $ ./cmph -m keys_file.mph keys_query
+```
+
+The additional options let you set most of the parameters you have
+available through the C API. Below you can see the full help message for the
+utility.
+ + +``` +usage: cmph [-v] [-h] [-V] [-k nkeys] [-f hash_function] [-g [-c algorithm_dependent_value][-s seed] ] + [-a algorithm] [-M memory_in_MB] [-b algorithm_dependent_value] [-t keys_per_bin] [-d tmp_dir] + [-m file.mph] keysfile +Minimum perfect hashing tool + + -h print this help message + -c c value determines: + * the number of vertices in the graph for the algorithms BMZ and CHM + * the number of bits per key required in the FCH algorithm + * the load factor in the CHD_PH algorithm + -a algorithm - valid values are + * bmz + * bmz8 + * chm + * brz + * fch + * bdz + * bdz_ph + * chd_ph + * chd + -f hash function (may be used multiple times) - valid values are + * jenkins + -V print version number and exit + -v increase verbosity (may be used multiple times) + -k number of keys + -g generation mode + -s random seed + -m minimum perfect hash function file + -M main memory availability (in MB) used in BRZ algorithm + -d temporary directory used in BRZ algorithm + -b the meaning of this parameter depends on the algorithm selected in the -a option: + * For BRZ it is used to make the maximal number of keys in a bucket lower than 256. + In this case its value should be an integer in the range [64,175]. Default is 128. + + * For BDZ it is used to determine the size of some precomputed rank + information and its value should be an integer in the range [3,10]. Default + is 7. The larger is this value, the more compact are the resulting functions + and the slower are them at evaluation time. + + * For CHD and CHD_PH it is used to set the average number of keys per bucket + and its value should be an integer in the range [1,32]. Default is 4. The + larger is this value, the slower is the construction of the functions. + This parameter has no effect for other algorithms. + + -t set the number of keys per bin for a t-perfect hashing function. A t-perfect + hash function allows at most t collisions in a given bin. 
This parameter applies
        only to the CHD and CHD_PH algorithms. Its value should be an integer in the
        range [1,128]. Default is 1
  keysfile  line separated file with keys
```

==Additional Documentation==

[FAQ faq.html]

==Downloads==

Use the project page at sourceforge: http://sf.net/projects/cmph

==License Stuff==

Code is under the LGPL and the MPL 1.1.
----------------------------------------

%!include: FOOTER.t2t

%!include(html): ''LOGO.t2t''
Last Updated: %%date(%c)

%!include(html): ''GOOGLEANALYTICS.t2t''
diff --git a/TABLE1.t2t b/TABLE1.t2t new file mode 100644 index 0000000..402a854 --- /dev/null +++ b/TABLE1.t2t @@ -0,0 +1,76 @@
Characteristics       BMZ          CHM
c                     1.15         2.09
|E(G)|                n            n
|V(G)| = |g|          cn           cn
|E(G_crit)|           0.5|E(G)|    0
G                     cyclic       acyclic
Order preserving      no           yes
\ No newline at end of file diff --git a/TABLE4.t2t b/TABLE4.t2t new file mode 100644 index 0000000..350fa1e --- /dev/null +++ b/TABLE4.t2t @@ -0,0 +1,109 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
n             BMZ algorithm                          CHM algorithm                          Gain
              N_i    Map+Ord   Search   Total        N_i    Map+Ord   Search   Total        (%)
1,562,500 2.28 8.54 2.37 10.91 2.70 14.56 1.57 16.13 48
3,125,000 2.16 15.92 4.88 20.80 2.85 30.36 3.20 33.56 61
6,250,000 2.20 33.09 10.48 43.57 2.90 62.26 6.76 69.02 58
12,500,000 2.00 63.26 23.04 86.30 2.60 117.99 14.94 132.92 54
25,000,000 2.00 130.79 51.55 182.34 2.80 262.05 33.68 295.73 62
50,000,000 2.07 273.75 114.12 387.87 2.90 577.59 73.97 651.56 68
100,000,000 2.07 567.47 243.13 810.60 2.80 1,131.06 157.23 1,288.29 59
diff --git a/TABLE5.t2t b/TABLE5.t2t new file mode 100644 index 0000000..8cf966a --- /dev/null +++ b/TABLE5.t2t @@ -0,0 +1,46 @@ + + + + + + + + + + + + + + + + + + + + + + + + + +
n             BMZ c = 1.00                           BMZ c = 0.93
              N_i    Map+Ord   Search   Total        N_i    Map+Ord   Search   Total
12,500,000 2.78 76.68 25.06 101.74 3.04 76.39 25.80 102.19
\ No newline at end of file diff --git a/TABLEBRZ1.t2t b/TABLEBRZ1.t2t new file mode 100644 index 0000000..e8a021f --- /dev/null +++ b/TABLEBRZ1.t2t @@ -0,0 +1,72 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
n (millions)         1            2            4            8            16            32
Average time (s)     6.1 ± 0.3    12.2 ± 0.6   25.4 ± 1.1   51.4 ± 2.0   117.3 ± 4.4   262.2 ± 8.7
SD (s)               2.6          5.4          9.8          17.6         37.3          76.3
diff --git a/TABLEBRZ2.t2t b/TABLEBRZ2.t2t new file mode 100644 index 0000000..a72094c --- /dev/null +++ b/TABLEBRZ2.t2t @@ -0,0 +1,133 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
n (millions)         1             2             4             8             16
Average time (s)     6.9 ± 0.3     13.8 ± 0.2    31.9 ± 0.7    69.9 ± 1.1    140.6 ± 2.5
SD                   0.4           0.2           0.9           1.5           3.5

n (millions)         32            64            128           512           1000
Average time (s)     284.3 ± 1.1   587.9 ± 3.9   1223.6 ± 4.9  5966.4 ± 9.5  13229.5 ± 12.7
SD                   1.6           5.5           6.8           13.2          18.6
diff --git a/TABLEBRZ3.t2t b/TABLEBRZ3.t2t new file mode 100644 index 0000000..516dcab --- /dev/null +++ b/TABLEBRZ3.t2t @@ -0,0 +1,147 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
µ (MB)                                    100       200      300      400      500      600
N (files)                                 619       310      207      155      124      104
µ/N (buffer size in KB)                   165       661      1,484    2,643    4,129    5,908
β/(µ/N) (# of seeks in the worst case)    384,478   95,974   42,749   24,003   15,365   10,738
Time (hours)                              4.04      3.64     3.34     3.20     3.13     3.09
diff --git a/acinclude.m4 b/acinclude.m4 new file mode 100644 index 0000000..e926f46 --- /dev/null +++ b/acinclude.m4 @@ -0,0 +1,207 @@ +AC_DEFUN([AC_ENABLE_CXXMPH], [AC_ARG_ENABLE([cxxmph], + [ --enable-cxxmph enable the c++ cxxmph library ], + [case "${enableval}" in + yes) cxxmph=true ;; + no) cxxmph=false ;; + *) AC_MSG_ERROR([bad value ${enableval} for --enable-cxxmph]) ;; + esac],[cxxmph=false])]) + +AC_DEFUN([AC_CHECK_SPOON], [ + AC_ARG_WITH(spoon, [ --with-spoon=SPOON this is inocuous, since the truth is that there is no spoon ]) + AC_MSG_CHECKING(if there is spoon) + AC_MSG_RESULT(no) +]) + +dnl Check for baseline language coverage in the compiler for the C++0x standard. +# AC_COMPILE_STDCXX_OX +AC_DEFUN([AC_COMPILE_STDCXX_0X], [ + AC_CACHE_CHECK(if compiler supports C++0x features without additional flags, + ac_cv_cxx_compile_cxx0x_native, + [AC_LANG_SAVE + AC_LANG_CPLUSPLUS + AC_TRY_COMPILE([ + #include + #include + template + struct check + { + static_assert(sizeof(int) <= sizeof(T), "not big enough"); + }; + + typedef check> right_angle_brackets; + + int a; + decltype(a) b; + ],, + ac_cv_cxx_compile_cxx0x_native=yes, ac_cv_cxx_compile_cxx0x_native=no) + AC_LANG_RESTORE + ]) + + AC_CACHE_CHECK(if compiler supports C++0x features with -std=c++0x, + ac_cv_cxx_compile_cxx0x_cxx, + [AC_LANG_SAVE + AC_LANG_CPLUSPLUS + ac_save_CXXFLAGS="$CXXFLAGS" + CXXFLAGS="$CXXFLAGS -std=c++0x" + AC_TRY_COMPILE([ + #include + template + struct check + { + static_assert(sizeof(int) <= sizeof(T), "not big enough"); + }; + + typedef check> right_angle_brackets; + + int a; + decltype(a) b;],, + ac_cv_cxx_compile_cxx0x_cxx=yes, ac_cv_cxx_compile_cxx0x_cxx=no) + CXXFLAGS="$ac_save_CXXFLAGS" + AC_LANG_RESTORE + ]) + + AC_CACHE_CHECK(if compiler supports C++0x features with -std=gnu++0x, + ac_cv_cxx_compile_cxx0x_gxx, + [AC_LANG_SAVE + AC_LANG_CPLUSPLUS + ac_save_CXXFLAGS="$CXXFLAGS" + CXXFLAGS="$CXXFLAGS -std=gnu++0x" + AC_TRY_COMPILE([ + #include + template + struct check + { + 
static_assert(sizeof(int) <= sizeof(T), "not big enough"); + }; + + typedef check> right_angle_brackets; + + int a; + decltype(a) b;],, + ac_cv_cxx_compile_cxx0x_gxx=yes, ac_cv_cxx_compile_cxx0x_gxx=no) + CXXFLAGS="$ac_save_CXXFLAGS" + AC_LANG_RESTORE + ]) + + if test "$ac_cv_cxx_compile_cxx0x_native" = yes || + test "$ac_cv_cxx_compile_cxx0x_cxx" = yes || + test "$ac_cv_cxx_compile_cxx0x_gxx" = yes; then + AC_DEFINE(HAVE_STDCXX_0X,,[Define if g++ supports C++0x features. ]) + fi +]) + +dnl By default, many hosts won't let programs access large files; +dnl one must use special compiler options to get large-file access to work. +dnl For more details about this brain damage please see: +dnl http://www.sas.com/standards/large.file/x_open.20Mar96.html + +dnl Written by Paul Eggert . + +dnl Internal subroutine of AC_SYS_EXTRA_LARGEFILE. +dnl AC_SYS_EXTRA_LARGEFILE_FLAGS(FLAGSNAME) +AC_DEFUN([AC_SYS_EXTRA_LARGEFILE_FLAGS], + [AC_CACHE_CHECK([for $1 value to request large file support], + ac_cv_sys_largefile_$1, + [ac_cv_sys_largefile_$1=`($GETCONF LFS_$1) 2>/dev/null` || { + ac_cv_sys_largefile_$1=no + ifelse($1, CFLAGS, + [case "$host_os" in + # IRIX 6.2 and later require cc -n32. +changequote(, )dnl + irix6.[2-9]* | irix6.1[0-9]* | irix[7-9].* | irix[1-9][0-9]*) +changequote([, ])dnl + if test "$GCC" != yes; then + ac_cv_sys_largefile_CFLAGS=-n32 + fi + ac_save_CC="$CC" + CC="$CC $ac_cv_sys_largefile_CFLAGS" + AC_TRY_LINK(, , , ac_cv_sys_largefile_CFLAGS=no) + CC="$ac_save_CC" + esac]) + }])]) + +dnl Internal subroutine of AC_SYS_EXTRA_LARGEFILE. +dnl AC_SYS_EXTRA_LARGEFILE_SPACE_APPEND(VAR, VAL) +AC_DEFUN([AC_SYS_EXTRA_LARGEFILE_SPACE_APPEND], + [case $2 in + no) ;; + ?*) + case "[$]$1" in + '') $1=$2 ;; + *) $1=[$]$1' '$2 ;; + esac ;; + esac]) + +dnl Internal subroutine of AC_SYS_EXTRA_LARGEFILE. 
+dnl AC_SYS_EXTRA_LARGEFILE_MACRO_VALUE(C-MACRO, CACHE-VAR, COMMENT, CODE-TO-SET-DEFAULT) +AC_DEFUN([AC_SYS_EXTRA_LARGEFILE_MACRO_VALUE], + [AC_CACHE_CHECK([for $1], $2, + [$2=no +changequote(, )dnl + $4 + for ac_flag in $ac_cv_sys_largefile_CFLAGS no; do + case "$ac_flag" in + -D$1) + $2=1 ;; + -D$1=*) + $2=`expr " $ac_flag" : '[^=]*=\(.*\)'` ;; + esac + done +changequote([, ])dnl + ]) + if test "[$]$2" != no; then + AC_DEFINE_UNQUOTED([$1], [$]$2, [$3]) + fi]) + +AC_DEFUN([AC_SYS_EXTRA_LARGEFILE], + [AC_REQUIRE([AC_CANONICAL_HOST]) + AC_ARG_ENABLE(largefile, + [ --disable-largefile omit support for large files]) + if test "$enable_largefile" != no; then + AC_CHECK_TOOL(GETCONF, getconf) + AC_SYS_EXTRA_LARGEFILE_FLAGS(CFLAGS) + AC_SYS_EXTRA_LARGEFILE_FLAGS(LDFLAGS) + AC_SYS_EXTRA_LARGEFILE_FLAGS(LIBS) + + for ac_flag in $ac_cv_sys_largefile_CFLAGS no; do + case "$ac_flag" in + no) ;; + -D_FILE_OFFSET_BITS=*) ;; + -D_LARGEFILE_SOURCE | -D_LARGEFILE_SOURCE=*) ;; + -D_LARGE_FILES | -D_LARGE_FILES=*) ;; + -D?* | -I?*) + AC_SYS_EXTRA_LARGEFILE_SPACE_APPEND(CPPFLAGS, "$ac_flag") ;; + *) + AC_SYS_EXTRA_LARGEFILE_SPACE_APPEND(CFLAGS, "$ac_flag") ;; + esac + done + AC_SYS_EXTRA_LARGEFILE_SPACE_APPEND(LDFLAGS, "$ac_cv_sys_largefile_LDFLAGS") + AC_SYS_EXTRA_LARGEFILE_SPACE_APPEND(LIBS, "$ac_cv_sys_largefile_LIBS") + AC_SYS_EXTRA_LARGEFILE_MACRO_VALUE(_FILE_OFFSET_BITS, + ac_cv_sys_file_offset_bits, + [Number of bits in a file offset, on hosts where this is settable.]) + [case "$host_os" in + # HP-UX 10.20 and later + hpux10.[2-9][0-9]* | hpux1[1-9]* | hpux[2-9][0-9]*) + ac_cv_sys_file_offset_bits=64 ;; + esac] + AC_SYS_EXTRA_LARGEFILE_MACRO_VALUE(_LARGEFILE_SOURCE, + ac_cv_sys_largefile_source, + [Define to make fseeko etc. 
visible, on some hosts.], + [case "$host_os" in + # HP-UX 10.20 and later + hpux10.[2-9][0-9]* | hpux1[1-9]* | hpux[2-9][0-9]*) + ac_cv_sys_largefile_source=1 ;; + esac]) + AC_SYS_EXTRA_LARGEFILE_MACRO_VALUE(_LARGE_FILES, + ac_cv_sys_large_files, + [Define for large files, on AIX-style hosts.], + [case "$host_os" in + # AIX 4.2 and later + aix4.[2-9]* | aix4.1[0-9]* | aix[5-9].* | aix[1-9][0-9]*) + ac_cv_sys_large_files=1 ;; + esac]) + fi + ]) + + diff --git a/cmph.pc.in b/cmph.pc.in new file mode 100644 index 0000000..6eb21c3 --- /dev/null +++ b/cmph.pc.in @@ -0,0 +1,12 @@ +url=http://cmph.sourceforge.net/ +prefix=@prefix@ +exec_prefix=@exec_prefix@ +libdir=@libdir@ +includedir=@includedir@ + +Name: cmph +Description: minimal perfect hashing library +Version: @VERSION@ +Libs: -L${libdir} -lcmph +Cflags: -I${includedir} +URL: ${url} diff --git a/cmph.spec b/cmph.spec new file mode 100644 index 0000000..d6c239e --- /dev/null +++ b/cmph.spec @@ -0,0 +1,39 @@ +%define name cmph +%define version 0.4 +%define release 3 + +Name: %{name} +Version: %{version} +Release: %{release} +Summary: C Minimal perfect hash library +Source: %{name}-%{version}.tar.gz +License: Proprietary +URL: http://www.akwan.com.br +BuildArch: i386 +Group: Sitesearch +BuildRoot: %{_tmppath}/%{name}-root + +%description +C Minimal perfect hash library + +%prep +rm -Rf $RPM_BUILD_ROOT +rm -rf $RPM_BUILD_ROOT +%setup +mkdir $RPM_BUILD_ROOT +mkdir $RPM_BUILD_ROOT/usr +CXXFLAGS="-O2" ./configure --prefix=/usr/ + +%build +make + +%install +DESTDIR=$RPM_BUILD_ROOT make install + +%files +%defattr(755,root,root) +/ + +%changelog +* Tue Jun 1 2004 Davi de Castro Reis ++ Initial build diff --git a/cmph.vcproj b/cmph.vcproj new file mode 100644 index 0000000..d7e925b --- /dev/null +++ b/cmph.vcproj @@ -0,0 +1,210 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + diff --git a/cmphapp.vcproj b/cmphapp.vcproj new file mode 100644 index 0000000..bf92899 --- /dev/null +++ b/cmphapp.vcproj @@ -0,0 +1,141 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/configure.ac b/configure.ac new file mode 100644 index 0000000..4fe517b --- /dev/null +++ b/configure.ac @@ -0,0 +1,57 @@ +dnl Process this file with autoconf to produce a configure script. +AC_INIT +AC_CONFIG_SRCDIR([Makefile.am]) +AM_INIT_AUTOMAKE(cmph, 1.0) +AC_CONFIG_HEADERS([config.h]) +AC_CONFIG_MACRO_DIR([m4]) + +dnl Checks for programs. +AC_PROG_AWK +AC_PROG_CC +AC_PROG_INSTALL +AC_PROG_LN_S +LT_INIT +AC_SYS_EXTRA_LARGEFILE +if test "x$ac_cv_sys_largefile_CFLAGS" = "xno" ; then + ac_cv_sys_largefile_CFLAGS="" +fi +if test "x$ac_cv_sys_largefile_LDFLAGS" = "xno" ; then + ac_cv_sys_largefile_LDFLAGS="" +fi +if test "x$ac_cv_sys_largefile_LIBS" = "xno" ; then + ac_cv_sys_largefile_LIBS="" +fi +CFLAGS="$ac_cv_sys_largefile_CFLAGS $CFLAGS" +LDFLAGS="$ac_cv_sys_largefile_LDFLAGS $LDFLAGS" +LIBS="$LIBS $ac_cv_sys_largefile_LIBS" + +dnl Checks for headers +AC_CHECK_HEADERS([getopt.h math.h]) + +dnl Checks for libraries. 
+LT_LIB_M +LDFLAGS="$LIBM $LDFLAGS" +CFLAGS="-Wall" + +AC_PROG_CXX +CXXFLAGS="-Wall -Wno-unused-function -DNDEBUG -O3 -fomit-frame-pointer $CXXFLAGS" +AC_ENABLE_CXXMPH +if test x$cxxmph = xtrue; then + AC_COMPILE_STDCXX_0X + if test x$ac_cv_cxx_compile_cxx0x_native = "xno"; then + if test x$ac_cv_cxx_compile_cxx0x_cxx = "xyes"; then + CXXFLAGS="$CXXFLAGS -std=c++0x" + elif test x$ac_cv_cxx_compile_cxx0x_gxx = "xyes"; then + CXXFLAGS="$CXXFLAGS -std=gnu++0x" + else + AC_MSG_ERROR("cxxmph demands a working c++0x compiler.") + fi + fi + AC_SUBST([CXXMPH], "cxxmph") +fi + +AC_CHECK_SPOON +dnl AC_CONFIG_FILES([Makefile tests/Makefile samples/Makefile]) +AC_OUTPUT +AC_CONFIG_FILES([Makefile src/Makefile cxxmph/Makefile tests/Makefile examples/Makefile man/Makefile cmph.pc]) +AC_OUTPUT diff --git a/cxxmph/Makefile.am b/cxxmph/Makefile.am new file mode 100644 index 0000000..04ba47e --- /dev/null +++ b/cxxmph/Makefile.am @@ -0,0 +1,38 @@ +TESTS = $(check_PROGRAMS) +check_PROGRAMS = seeded_hash_test mph_bits_test hollow_iterator_test mph_map_test mph_index_test trigraph_test map_tester_test +noinst_PROGRAMS = bm_index bm_map +bin_PROGRAMS = cxxmph +lib_LTLIBRARIES = libcxxmph.la +libcxxmph_la_SOURCES = MurmurHash3.h MurmurHash3.cpp trigragh.h trigraph.cc mph_bits.h mph_bits.cc mph_index.h mph_index.cc seeded_hash.h stringpiece.h benchmark.h benchmark.cc +libcxxmph_la_LDFLAGS = -version-info 0:0:0 +cxxmph_includedir = $(includedir)/cxxmph/ +cxxmph_include_HEADERS = mph_map.h mph_index.h MurmurHash3.h trigraph.h seeded_hash.h stringpiece.h hollow_iterator.h + +mph_map_test_LDADD = libcxxmph.la +mph_map_test_SOURCES = mph_map_test.cc + +mph_index_test_LDADD = libcxxmph.la +mph_index_test_SOURCES = mph_index_test.cc + +bm_index_LDADD = libcxxmph.la -lcmph +bm_index_SOURCES = bm_common.cc bm_index.cc + +trigraph_test_LDADD = libcxxmph.la +trigraph_test_SOURCES = trigraph_test.cc + +bm_map_LDADD = libcxxmph.la +bm_map_SOURCES = bm_common.cc bm_map.cc + +cxxmph_LDADD = libcxxmph.la 
+cxxmph_SOURCES = cxxmph.cc + +hollow_iterator_test_SOURCES = hollow_iterator_test.cc + +seeded_hash_test_SOURCES = seeded_hash_test.cc +seeded_hash_test_LDADD = libcxxmph.la + +mph_bits_test_SOURCES = mph_bits_test.cc +mph_bits_test_LDADD = libcxxmph.la + +map_tester_test_SOURCES = map_tester.cc map_tester_test.cc + diff --git a/cxxmph/MurmurHash3.cpp b/cxxmph/MurmurHash3.cpp new file mode 100644 index 0000000..09ffb26 --- /dev/null +++ b/cxxmph/MurmurHash3.cpp @@ -0,0 +1,335 @@ +//----------------------------------------------------------------------------- +// MurmurHash3 was written by Austin Appleby, and is placed in the public +// domain. The author hereby disclaims copyright to this source code. + +// Note - The x86 and x64 versions do _not_ produce the same results, as the +// algorithms are optimized for their respective platforms. You can still +// compile and run any of them on any platform, but your performance with the +// non-native version will be less than optimal. + +#include "MurmurHash3.h" + +//----------------------------------------------------------------------------- +// Platform-specific functions and macros + +// Microsoft Visual Studio + +#if defined(_MSC_VER) + +#define FORCE_INLINE __forceinline + +#include + +#define ROTL32(x,y) _rotl(x,y) +#define ROTL64(x,y) _rotl64(x,y) + +#define BIG_CONSTANT(x) (x) + +// Other compilers + +#else // defined(_MSC_VER) + +#define FORCE_INLINE __attribute__((always_inline)) + +inline uint32_t rotl32 ( uint32_t x, int8_t r ) +{ + return (x << r) | (x >> (32 - r)); +} + +inline uint64_t rotl64 ( uint64_t x, int8_t r ) +{ + return (x << r) | (x >> (64 - r)); +} + +#define ROTL32(x,y) rotl32(x,y) +#define ROTL64(x,y) rotl64(x,y) + +#define BIG_CONSTANT(x) (x##LLU) + +#endif // !defined(_MSC_VER) + +//----------------------------------------------------------------------------- +// Block read - if your platform needs to do endian-swapping or can only +// handle aligned reads, do the conversion here + 
+FORCE_INLINE uint32_t getblock ( const uint32_t * p, int i ) +{ + return p[i]; +} + +FORCE_INLINE uint64_t getblock ( const uint64_t * p, int i ) +{ + return p[i]; +} + +//----------------------------------------------------------------------------- +// Finalization mix - force all bits of a hash block to avalanche + +FORCE_INLINE uint32_t fmix ( uint32_t h ) +{ + h ^= h >> 16; + h *= 0x85ebca6b; + h ^= h >> 13; + h *= 0xc2b2ae35; + h ^= h >> 16; + + return h; +} + +//---------- + +FORCE_INLINE uint64_t fmix ( uint64_t k ) +{ + k ^= k >> 33; + k *= BIG_CONSTANT(0xff51afd7ed558ccd); + k ^= k >> 33; + k *= BIG_CONSTANT(0xc4ceb9fe1a85ec53); + k ^= k >> 33; + + return k; +} + +//----------------------------------------------------------------------------- + +void MurmurHash3_x86_32 ( const void * key, int len, + uint32_t seed, void * out ) +{ + const uint8_t * data = (const uint8_t*)key; + const int nblocks = len / 4; + + uint32_t h1 = seed; + + uint32_t c1 = 0xcc9e2d51; + uint32_t c2 = 0x1b873593; + + //---------- + // body + + const uint32_t * blocks = (const uint32_t *)(data + nblocks*4); + + for(int i = -nblocks; i; i++) + { + uint32_t k1 = getblock(blocks,i); + + k1 *= c1; + k1 = ROTL32(k1,15); + k1 *= c2; + + h1 ^= k1; + h1 = ROTL32(h1,13); + h1 = h1*5+0xe6546b64; + } + + //---------- + // tail + + const uint8_t * tail = (const uint8_t*)(data + nblocks*4); + + uint32_t k1 = 0; + + switch(len & 3) + { + case 3: k1 ^= tail[2] << 16; + case 2: k1 ^= tail[1] << 8; + case 1: k1 ^= tail[0]; + k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1; + }; + + //---------- + // finalization + + h1 ^= len; + + h1 = fmix(h1); + + *(uint32_t*)out = h1; +} + +//----------------------------------------------------------------------------- + +void MurmurHash3_x86_128 ( const void * key, const int len, + uint32_t seed, void * out ) +{ + const uint8_t * data = (const uint8_t*)key; + const int nblocks = len / 16; + + uint32_t h1 = seed; + uint32_t h2 = seed; + uint32_t h3 = seed; + 
uint32_t h4 = seed; + + uint32_t c1 = 0x239b961b; + uint32_t c2 = 0xab0e9789; + uint32_t c3 = 0x38b34ae5; + uint32_t c4 = 0xa1e38b93; + + //---------- + // body + + const uint32_t * blocks = (const uint32_t *)(data + nblocks*16); + + for(int i = -nblocks; i; i++) + { + uint32_t k1 = getblock(blocks,i*4+0); + uint32_t k2 = getblock(blocks,i*4+1); + uint32_t k3 = getblock(blocks,i*4+2); + uint32_t k4 = getblock(blocks,i*4+3); + + k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1; + + h1 = ROTL32(h1,19); h1 += h2; h1 = h1*5+0x561ccd1b; + + k2 *= c2; k2 = ROTL32(k2,16); k2 *= c3; h2 ^= k2; + + h2 = ROTL32(h2,17); h2 += h3; h2 = h2*5+0x0bcaa747; + + k3 *= c3; k3 = ROTL32(k3,17); k3 *= c4; h3 ^= k3; + + h3 = ROTL32(h3,15); h3 += h4; h3 = h3*5+0x96cd1c35; + + k4 *= c4; k4 = ROTL32(k4,18); k4 *= c1; h4 ^= k4; + + h4 = ROTL32(h4,13); h4 += h1; h4 = h4*5+0x32ac3b17; + } + + //---------- + // tail + + const uint8_t * tail = (const uint8_t*)(data + nblocks*16); + + uint32_t k1 = 0; + uint32_t k2 = 0; + uint32_t k3 = 0; + uint32_t k4 = 0; + + switch(len & 15) + { + case 15: k4 ^= tail[14] << 16; + case 14: k4 ^= tail[13] << 8; + case 13: k4 ^= tail[12] << 0; + k4 *= c4; k4 = ROTL32(k4,18); k4 *= c1; h4 ^= k4; + + case 12: k3 ^= tail[11] << 24; + case 11: k3 ^= tail[10] << 16; + case 10: k3 ^= tail[ 9] << 8; + case 9: k3 ^= tail[ 8] << 0; + k3 *= c3; k3 = ROTL32(k3,17); k3 *= c4; h3 ^= k3; + + case 8: k2 ^= tail[ 7] << 24; + case 7: k2 ^= tail[ 6] << 16; + case 6: k2 ^= tail[ 5] << 8; + case 5: k2 ^= tail[ 4] << 0; + k2 *= c2; k2 = ROTL32(k2,16); k2 *= c3; h2 ^= k2; + + case 4: k1 ^= tail[ 3] << 24; + case 3: k1 ^= tail[ 2] << 16; + case 2: k1 ^= tail[ 1] << 8; + case 1: k1 ^= tail[ 0] << 0; + k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1; + }; + + //---------- + // finalization + + h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len; + + h1 += h2; h1 += h3; h1 += h4; + h2 += h1; h3 += h1; h4 += h1; + + h1 = fmix(h1); + h2 = fmix(h2); + h3 = fmix(h3); + h4 = fmix(h4); + + h1 += h2; h1 
+= h3; h1 += h4; + h2 += h1; h3 += h1; h4 += h1; + + ((uint32_t*)out)[0] = h1; + ((uint32_t*)out)[1] = h2; + ((uint32_t*)out)[2] = h3; + ((uint32_t*)out)[3] = h4; +} + +//----------------------------------------------------------------------------- + +void MurmurHash3_x64_128 ( const void * key, const int len, + const uint32_t seed, void * out ) +{ + const uint8_t * data = (const uint8_t*)key; + const int nblocks = len / 16; + + uint64_t h1 = seed; + uint64_t h2 = seed; + + uint64_t c1 = BIG_CONSTANT(0x87c37b91114253d5); + uint64_t c2 = BIG_CONSTANT(0x4cf5ad432745937f); + + //---------- + // body + + const uint64_t * blocks = (const uint64_t *)(data); + + for(int i = 0; i < nblocks; i++) + { + uint64_t k1 = getblock(blocks,i*2+0); + uint64_t k2 = getblock(blocks,i*2+1); + + k1 *= c1; k1 = ROTL64(k1,31); k1 *= c2; h1 ^= k1; + + h1 = ROTL64(h1,27); h1 += h2; h1 = h1*5+0x52dce729; + + k2 *= c2; k2 = ROTL64(k2,33); k2 *= c1; h2 ^= k2; + + h2 = ROTL64(h2,31); h2 += h1; h2 = h2*5+0x38495ab5; + } + + //---------- + // tail + + const uint8_t * tail = (const uint8_t*)(data + nblocks*16); + + uint64_t k1 = 0; + uint64_t k2 = 0; + + switch(len & 15) + { + case 15: k2 ^= uint64_t(tail[14]) << 48; + case 14: k2 ^= uint64_t(tail[13]) << 40; + case 13: k2 ^= uint64_t(tail[12]) << 32; + case 12: k2 ^= uint64_t(tail[11]) << 24; + case 11: k2 ^= uint64_t(tail[10]) << 16; + case 10: k2 ^= uint64_t(tail[ 9]) << 8; + case 9: k2 ^= uint64_t(tail[ 8]) << 0; + k2 *= c2; k2 = ROTL64(k2,33); k2 *= c1; h2 ^= k2; + + case 8: k1 ^= uint64_t(tail[ 7]) << 56; + case 7: k1 ^= uint64_t(tail[ 6]) << 48; + case 6: k1 ^= uint64_t(tail[ 5]) << 40; + case 5: k1 ^= uint64_t(tail[ 4]) << 32; + case 4: k1 ^= uint64_t(tail[ 3]) << 24; + case 3: k1 ^= uint64_t(tail[ 2]) << 16; + case 2: k1 ^= uint64_t(tail[ 1]) << 8; + case 1: k1 ^= uint64_t(tail[ 0]) << 0; + k1 *= c1; k1 = ROTL64(k1,31); k1 *= c2; h1 ^= k1; + }; + + //---------- + // finalization + + h1 ^= len; h2 ^= len; + + h1 += h2; + h2 += h1; + + h1 = 
fmix(h1); + h2 = fmix(h2); + + h1 += h2; + h2 += h1; + + ((uint64_t*)out)[0] = h1; + ((uint64_t*)out)[1] = h2; +} + +//----------------------------------------------------------------------------- + diff --git a/cxxmph/MurmurHash3.h b/cxxmph/MurmurHash3.h new file mode 100644 index 0000000..54e9d3f --- /dev/null +++ b/cxxmph/MurmurHash3.h @@ -0,0 +1,37 @@ +//----------------------------------------------------------------------------- +// MurmurHash3 was written by Austin Appleby, and is placed in the public +// domain. The author hereby disclaims copyright to this source code. + +#ifndef _MURMURHASH3_H_ +#define _MURMURHASH3_H_ + +//----------------------------------------------------------------------------- +// Platform-specific functions and macros + +// Microsoft Visual Studio + +#if defined(_MSC_VER) + +typedef unsigned char uint8_t; +typedef unsigned long uint32_t; +typedef unsigned __int64 uint64_t; + +// Other compilers + +#else // defined(_MSC_VER) + +#include + +#endif // !defined(_MSC_VER) + +//----------------------------------------------------------------------------- + +void MurmurHash3_x86_32 ( const void * key, int len, uint32_t seed, void * out ); + +void MurmurHash3_x86_128 ( const void * key, int len, uint32_t seed, void * out ); + +void MurmurHash3_x64_128 ( const void * key, int len, uint32_t seed, void * out ); + +//----------------------------------------------------------------------------- + +#endif // _MURMURHASH3_H_ diff --git a/cxxmph/benchmark.cc b/cxxmph/benchmark.cc new file mode 100644 index 0000000..1f260fa --- /dev/null +++ b/cxxmph/benchmark.cc @@ -0,0 +1,142 @@ +#include "benchmark.h" + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +using std::cerr; +using std::cout; +using std::endl; +using std::setfill; +using std::setw; +using std::string; +using std::ostringstream; +using std::vector; + +namespace { + +/* Subtract the `struct timeval' values X and Y, + storing the 
result in RESULT. + Return 1 if the difference is negative, otherwise 0. */ +int timeval_subtract ( + struct timeval *result, struct timeval *x, struct timeval* y) { + /* Perform the carry for the later subtraction by updating y. */ + if (x->tv_usec < y->tv_usec) { + int nsec = (y->tv_usec - x->tv_usec) / 1000000 + 1; + y->tv_usec -= 1000000 * nsec; + y->tv_sec += nsec; + } + if (x->tv_usec - y->tv_usec > 1000000) { + int nsec = (x->tv_usec - y->tv_usec) / 1000000; + y->tv_usec += 1000000 * nsec; + y->tv_sec -= nsec; + } + + /* Compute the time remaining to wait. + tv_usec is certainly positive. */ + result->tv_sec = x->tv_sec - y->tv_sec; + result->tv_usec = x->tv_usec - y->tv_usec; + + /* Return 1 if result is negative. */ + return x->tv_sec < y->tv_sec; +} + +// C++ iostream is terrible for formatting. +string timeval_to_string(timeval tv) { + ostringstream out; + out << setfill(' ') << setw(3) << tv.tv_sec << '.'; + out << setfill('0') << setw(6) << tv.tv_usec; + return out.str(); +} + +struct rusage getrusage_or_die() { + struct rusage rs; + int ret = getrusage(RUSAGE_SELF, &rs); + if (ret != 0) { + cerr << "rusage failed: " << strerror(errno) << endl; + exit(-1); + } + return rs; +} + +struct timeval gettimeofday_or_die() { + struct timeval tv; + int ret = gettimeofday(&tv, NULL); + if (ret != 0) { + cerr << "gettimeofday failed: " << strerror(errno) << endl; + exit(-1); + } + return tv; +} + +#ifdef HAVE_CXA_DEMANGLE +string demangle(const string& name) { + char buf[1024]; + unsigned int size = 1024; + int status; + char* res = abi::__cxa_demangle( + name.c_str(), buf, &size, &status); + return res; +} +#else +string demangle(const string& name) { return name; } +#endif + + +static vector g_benchmarks; + +} // anonymous namespace + +namespace cxxmph { + +/* static */ void Benchmark::Register(Benchmark* bm) { + if (bm->name().empty()) { + string name = demangle(typeid(*bm).name()); + bm->set_name(name); + } + g_benchmarks.push_back(bm); +} + +/* static */ 
void Benchmark::RunAll() { + for (uint32_t i = 0; i < g_benchmarks.size(); ++i) { + std::auto_ptr bm(g_benchmarks[i]); + if (!bm->SetUp()) { + cerr << "Set up phase for benchmark " + << bm->name() << " failed." << endl; + continue; + } + bm->MeasureRun(); + bm->TearDown(); + } +} + +void Benchmark::MeasureRun() { + struct timeval walltime_begin = gettimeofday_or_die(); + struct rusage begin = getrusage_or_die(); + Run(); + struct rusage end = getrusage_or_die(); + struct timeval walltime_end = gettimeofday_or_die(); + + struct timeval utime; + timeval_subtract(&utime, &end.ru_utime, &begin.ru_utime); + struct timeval stime; + timeval_subtract(&stime, &end.ru_stime, &begin.ru_stime); + struct timeval wtime; + timeval_subtract(&wtime, &walltime_end, &walltime_begin); + + cout << "Benchmark: " << name_ << endl; + cout << "CPU User time : " << timeval_to_string(utime) << endl; + cout << "CPU System time: " << timeval_to_string(stime) << endl; + cout << "Wall clock time: " << timeval_to_string(wtime) << endl; + cout << endl; +} + +} // namespace cxxmph diff --git a/cxxmph/benchmark.h b/cxxmph/benchmark.h new file mode 100644 index 0000000..cecbc2f --- /dev/null +++ b/cxxmph/benchmark.h @@ -0,0 +1,32 @@ +#ifndef __CXXMPH_BENCHMARK_H__ +#define __CXXMPH_BENCHMARK_H__ + +#include +#include + +namespace cxxmph { + +class Benchmark { + public: + Benchmark() {} + virtual ~Benchmark() {} + + const std::string& name() { return name_; } + void set_name(const std::string& name) { name_ = name; } + + static void Register(Benchmark* bm); + static void RunAll(); + + protected: + virtual bool SetUp() { return true; }; + virtual void Run() = 0; + virtual bool TearDown() { return true; }; + + private: + std::string name_; + void MeasureRun(); +}; + +} // namespace cxxmph + +#endif diff --git a/cxxmph/bm_common.cc b/cxxmph/bm_common.cc new file mode 100644 index 0000000..1baaa09 --- /dev/null +++ b/cxxmph/bm_common.cc @@ -0,0 +1,75 @@ +#include +#include +#include +#include +#include + 
+#include "bm_common.h" + +using std::cerr; +using std::endl; +using std::set; +using std::string; +using std::vector; + +namespace cxxmph { + +UrlsBenchmark::~UrlsBenchmark() {} +bool UrlsBenchmark::SetUp() { + vector urls; + std::ifstream f(urls_file_.c_str()); + if (!f.is_open()) { + cerr << "Failed to open urls file " << urls_file_ << endl; + return false; + } + string buffer; + while(std::getline(f, buffer)) urls.push_back(buffer); + set unique(urls.begin(), urls.end()); + if (unique.size() != urls.size()) { + cerr << "Input file has repeated keys." << endl; + return false; + } + urls.swap(urls_); + return true; +} + +SearchUrlsBenchmark::~SearchUrlsBenchmark() {} +bool SearchUrlsBenchmark::SetUp() { + if (!UrlsBenchmark::SetUp()) return false; + int32_t miss_ratio_int32 = std::numeric_limits::max() * miss_ratio_; + forced_miss_urls_.resize(nsearches_); + random_.resize(nsearches_); + for (uint32_t i = 0; i < nsearches_; ++i) { + random_[i] = urls_[random() % urls_.size()]; + if (random() < miss_ratio_int32) { + forced_miss_urls_[i] = random_[i].as_string() + ".force_miss"; + random_[i] = forced_miss_urls_[i]; + } + } + return true; +} + +Uint64Benchmark::~Uint64Benchmark() {} +bool Uint64Benchmark::SetUp() { + set unique; + for (uint32_t i = 0; i < count_; ++i) { + uint64_t v; + do { v = random(); } while (unique.find(v) != unique.end()); + values_.push_back(v); + unique.insert(v); + } + return true; +} + +SearchUint64Benchmark::~SearchUint64Benchmark() {} +bool SearchUint64Benchmark::SetUp() { + if (!Uint64Benchmark::SetUp()) return false; + random_.resize(nsearches_); + for (uint32_t i = 0; i < nsearches_; ++i) { + uint32_t pos = random() % values_.size(); + random_[i] = values_[pos]; + } + return true; +} + +} // namespace cxxmph diff --git a/cxxmph/bm_common.h b/cxxmph/bm_common.h new file mode 100644 index 0000000..178ee36 --- /dev/null +++ b/cxxmph/bm_common.h @@ -0,0 +1,73 @@ +#ifndef __CXXMPH_BM_COMMON_H__ +#define __CXXMPH_BM_COMMON_H__ + +#include 
"stringpiece.h" + +#include +#include +#include // std::hash +#include "MurmurHash3.h" + +#include "benchmark.h" + +namespace std { +template <> struct hash { + uint32_t operator()(const cxxmph::StringPiece& k) const { + uint32_t out; + MurmurHash3_x86_32(k.data(), k.length(), 1, &out); + return out; + } +}; +} // namespace std + +namespace cxxmph { + +class UrlsBenchmark : public Benchmark { + public: + UrlsBenchmark(const std::string& urls_file) : urls_file_(urls_file) { } + virtual ~UrlsBenchmark(); + protected: + virtual bool SetUp(); + const std::string urls_file_; + std::vector urls_; +}; + +class SearchUrlsBenchmark : public UrlsBenchmark { + public: + SearchUrlsBenchmark(const std::string& urls_file, uint32_t nsearches, float miss_ratio) + : UrlsBenchmark(urls_file), nsearches_(nsearches), miss_ratio_(miss_ratio) {} + virtual ~SearchUrlsBenchmark(); + protected: + virtual bool SetUp(); + const uint32_t nsearches_; + float miss_ratio_; + std::vector forced_miss_urls_; + std::vector random_; +}; + +class Uint64Benchmark : public Benchmark { + public: + Uint64Benchmark(uint32_t count) : count_(count) { } + virtual ~Uint64Benchmark(); + virtual void Run() {} + protected: + virtual bool SetUp(); + const uint32_t count_; + std::vector values_; +}; + +class SearchUint64Benchmark : public Uint64Benchmark { + public: + SearchUint64Benchmark(uint32_t count, uint32_t nsearches) + : Uint64Benchmark(count), nsearches_(nsearches) { } + virtual ~SearchUint64Benchmark(); + virtual void Run() {}; + protected: + virtual bool SetUp(); + const uint32_t nsearches_; + std::vector random_; +}; + +} // namespace cxxmph + +#endif // __CXXMPH_BM_COMMON_H__ diff --git a/cxxmph/bm_index.cc b/cxxmph/bm_index.cc new file mode 100644 index 0000000..d1cbc00 --- /dev/null +++ b/cxxmph/bm_index.cc @@ -0,0 +1,143 @@ +#include + +#include +#include +#include +#include + +#include "bm_common.h" +#include "stringpiece.h" +#include "mph_index.h" + +using namespace cxxmph; + +using std::string; 
+using std::unordered_map;
+
+class BM_MPHIndexCreate : public UrlsBenchmark {
+ public:
+  BM_MPHIndexCreate(const std::string& urls_file)
+      : UrlsBenchmark(urls_file) { }
+ protected:
+  virtual void Run() {
+    SimpleMPHIndex<StringPiece> index;
+    index.Reset(urls_.begin(), urls_.end(), urls_.size());
+  }
+};
+
+class BM_STLIndexCreate : public UrlsBenchmark {
+ public:
+  BM_STLIndexCreate(const std::string& urls_file)
+      : UrlsBenchmark(urls_file) { }
+ protected:
+  virtual void Run() {
+    unordered_map<StringPiece, int> index;
+    int idx = 0;
+    for (auto it = urls_.begin(); it != urls_.end(); ++it) {
+      index.insert(make_pair(*it, idx++));
+    }
+  }
+};
+
+class BM_MPHIndexSearch : public SearchUrlsBenchmark {
+ public:
+  BM_MPHIndexSearch(const std::string& urls_file, int nsearches)
+      : SearchUrlsBenchmark(urls_file, nsearches, 0) { }
+  virtual void Run() {
+    for (auto it = random_.begin(); it != random_.end(); ++it) {
+      auto idx = index_.index(*it);
+      // Collision check to be fair with STL
+      if (strcmp(urls_[idx].c_str(), it->data()) != 0) idx = -1;
+    }
+  }
+ protected:
+  virtual bool SetUp() {
+    if (!SearchUrlsBenchmark::SetUp()) return false;
+    index_.Reset(urls_.begin(), urls_.end(), urls_.size());
+    return true;
+  }
+  SimpleMPHIndex<StringPiece> index_;
+};
+
+class BM_CmphIndexSearch : public SearchUrlsBenchmark {
+ public:
+  BM_CmphIndexSearch(const std::string& urls_file, int nsearches)
+      : SearchUrlsBenchmark(urls_file, nsearches, 0) { }
+  ~BM_CmphIndexSearch() { if (index_) cmph_destroy(index_); }
+  virtual void Run() {
+    for (auto it = random_.begin(); it != random_.end(); ++it) {
+      auto idx = cmph_search(index_, it->data(), it->length());
+      // Collision check to be fair with STL
+      if (strcmp(urls_[idx].c_str(), it->data()) != 0) idx = -1;
+    }
+  }
+ protected:
+  virtual bool SetUp() {
+    if (!SearchUrlsBenchmark::SetUp()) {
+      cerr << "Parent class setup failed." << endl;
+      return false;
+    }
+    FILE* f = fopen(urls_file_.c_str(), "r");
+    if (!f) {
+      cerr << "Failed to open " << urls_file_ << endl;
+      return false;
+    }
+    cmph_io_adapter_t* source = cmph_io_nlfile_adapter(f);
+    if (!source) {
+      cerr << "Failed to create io adapter for " << urls_file_ << endl;
+      return false;
+    }
+    cmph_config_t* config = cmph_config_new(source);
+    if (!config) {
+      cerr << "Failed to create config" << endl;
+      return false;
+    }
+    cmph_config_set_algo(config, CMPH_BDZ);
+    cmph_t* mphf = cmph_new(config);
+    if (!mphf) {
+      cerr << "Failed to create mphf." << endl;
+      return false;
+    }
+
+    cmph_config_destroy(config);
+    cmph_io_nlfile_adapter_destroy(source);
+    fclose(f);
+    index_ = mphf;
+    return true;
+  }
+  cmph_t* index_;
+};
+
+class BM_STLIndexSearch : public SearchUrlsBenchmark {
+ public:
+  BM_STLIndexSearch(const std::string& urls_file, int nsearches)
+      : SearchUrlsBenchmark(urls_file, nsearches, 0) { }
+  virtual void Run() {
+    for (auto it = random_.begin(); it != random_.end(); ++it) {
+      auto idx = index_.find(*it);
+    }
+  }
+ protected:
+  virtual bool SetUp() {
+    if (!SearchUrlsBenchmark::SetUp()) return false;
+    unordered_map<StringPiece, int> index;
+    int idx = 0;
+    for (auto it = urls_.begin(); it != urls_.end(); ++it) {
+      index.insert(make_pair(*it, idx++));
+    }
+    index.swap(index_);
+    return true;
+  }
+  unordered_map<StringPiece, int> index_;
+};
+
+int main(int argc, char** argv) {
+  Benchmark::Register(new BM_MPHIndexCreate("URLS100k"));
+  Benchmark::Register(new BM_STLIndexCreate("URLS100k"));
+  Benchmark::Register(new BM_MPHIndexSearch("URLS100k", 10*1000*1000));
+  Benchmark::Register(new BM_STLIndexSearch("URLS100k", 10*1000*1000));
+  Benchmark::Register(new BM_CmphIndexSearch("URLS100k", 10*1000*1000));
+  Benchmark::RunAll();
+  return 0;
+}
diff --git a/cxxmph/bm_map.cc b/cxxmph/bm_map.cc
new file mode 100644
index 0000000..115f7f8
--- /dev/null
+++ b/cxxmph/bm_map.cc
@@ -0,0 +1,112 @@
+#include <cstdio>
+#include <cstdlib>
+#include <string>
+#include <unordered_map>
+
+#include "bm_common.h"
+#include "mph_map.h"
+
+using cxxmph::mph_map;
+using std::cerr;
+using std::endl;
+using std::string;
+using std::unordered_map;
+
+// Another reference benchmark:
+// http://blog.aggregateknowledge.com/tag/bigmemory/
+
+namespace cxxmph {
+
+template <class MapType, class T>
+const T* myfind(const MapType& mymap, const T& k) {
+  auto it = mymap.find(k);
+  auto end = mymap.end();
+  if (it == end) return NULL;
+  return &it->second;
+}
+
+template <class MapType>
+class BM_CreateUrls : public UrlsBenchmark {
+ public:
+  BM_CreateUrls(const string& urls_file) : UrlsBenchmark(urls_file) { }
+  virtual void Run() {
+    MapType mymap;
+    for (auto it = urls_.begin(); it != urls_.end(); ++it) {
+      mymap[*it] = *it;
+    }
+  }
+};
+
+template <class MapType>
+class BM_SearchUrls : public SearchUrlsBenchmark {
+ public:
+  BM_SearchUrls(const std::string& urls_file, int nsearches, float miss_ratio)
+      : SearchUrlsBenchmark(urls_file, nsearches, miss_ratio) { }
+  virtual ~BM_SearchUrls() {}
+  virtual void Run() {
+    uint32_t total = 1;
+    for (auto it = random_.begin(); it != random_.end(); ++it) {
+      auto v = myfind(mymap_, *it);
+      if (v) total += v->length();
+    }
+    fprintf(stderr, "Total: %u\n", total);
+  }
+ protected:
+  virtual bool SetUp() {
+    if (!SearchUrlsBenchmark::SetUp()) return false;
+    for (auto it = urls_.begin(); it != urls_.end(); ++it) {
+      mymap_[*it] = *it;
+    }
+    mymap_.rehash(mymap_.bucket_count());
+    fprintf(stderr, "Occupation: %f\n",
+            static_cast<float>(mymap_.size())/mymap_.bucket_count());
+    return true;
+  }
+  MapType mymap_;
+};
+
+template <class MapType>
+class BM_SearchUint64 : public SearchUint64Benchmark {
+ public:
+  BM_SearchUint64() : SearchUint64Benchmark(100000, 10*1000*1000) { }
+  virtual bool SetUp() {
+    if (!SearchUint64Benchmark::SetUp()) return false;
+    for (uint32_t i = 0; i < values_.size(); ++i) {
+      mymap_[values_[i]] = values_[i];
+    }
+    mymap_.rehash(mymap_.bucket_count());
+    // Double check if everything is all right
+    cerr << "Doing double check" << endl;
+    for (uint32_t i = 0; i < values_.size(); ++i) {
+      if (mymap_[values_[i]] != values_[i]) {
+        cerr << "Looking for " << i << "th key value " << values_[i];
+        cerr << " yielded " << mymap_[values_[i]] << endl;
+        return false;
+      }
+    }
+    return true;
+  }
+  virtual void Run() {
+    for (auto it = random_.begin(); it != random_.end(); ++it) {
+      auto v = myfind(mymap_, *it);
+      if (*v != *it) {
+        cerr << "Looked for " << *it << " got " << *v << endl;
+        exit(-1);
+      }
+    }
+  }
+  MapType mymap_;
+};
+
+}  // namespace cxxmph
+
+using namespace cxxmph;
+
+int main(int argc, char** argv) {
+  srandom(4);
+  Benchmark::Register(new BM_CreateUrls<mph_map<StringPiece, StringPiece>>("URLS100k"));
+  Benchmark::Register(new BM_CreateUrls<unordered_map<StringPiece, StringPiece>>("URLS100k"));
+  Benchmark::Register(new BM_SearchUrls<mph_map<StringPiece, StringPiece>>("URLS100k", 10*1000 * 1000, 0));
+  Benchmark::Register(new BM_SearchUrls<unordered_map<StringPiece, StringPiece>>("URLS100k", 10*1000 * 1000, 0));
+  Benchmark::Register(new BM_SearchUrls<mph_map<StringPiece, StringPiece>>("URLS100k", 10*1000 * 1000, 0.9));
+  Benchmark::Register(new BM_SearchUrls<unordered_map<StringPiece, StringPiece>>("URLS100k", 10*1000 * 1000, 0.9));
+  Benchmark::Register(new BM_SearchUint64<mph_map<uint64_t, uint64_t>>);
+  Benchmark::Register(new BM_SearchUint64<unordered_map<uint64_t, uint64_t>>);
+  Benchmark::RunAll();
+}
diff --git a/cxxmph/cxxmph.cc b/cxxmph/cxxmph.cc
new file mode 100644
index 0000000..b544acd
--- /dev/null
+++ b/cxxmph/cxxmph.cc
@@ -0,0 +1,70 @@
+// Copyright 2010 Google Inc. All Rights Reserved.
+// Author: davi@google.com (Davi Reis)
+
+#include <getopt.h>
+
+#include <fstream>
+#include <iostream>
+#include <string>
+#include <vector>
+
+#include "mph_map.h"
+#include "config.h"
+
+using std::cerr;
+using std::cout;
+using std::endl;
+using std::getline;
+using std::ifstream;
+using std::string;
+using std::vector;
+
+using cxxmph::mph_map;
+
+void usage(const char* prg) {
+  cerr << "usage: " << prg << " [-v] [-h] [-V] <keys_file>" << endl;
+}
+void usage_long(const char* prg) {
+  usage(prg);
+  cerr << "  -h\t print this help message" << endl;
+  cerr << "  -V\t print version number and exit" << endl;
+  cerr << "  -v\t increase verbosity (may be used multiple times)" << endl;
+}
+
+int main(int argc, char** argv) {
+
+  int verbosity = 0;
+  while (1) {
+    int ch = getopt(argc, argv, "hvV");
+    if (ch == -1) break;
+    switch (ch) {
+      case 'h':
+        usage_long(argv[0]);
+        return 0;
+      case 'V':
+        std::cout << VERSION << std::endl;
+        return 0;
+      case 'v':
+        ++verbosity;
+        break;
+    }
+  }
+  if (optind != argc - 1) {
+    usage(argv[0]);
+    return 1;
+  }
+  vector<string> keys;
+  ifstream f(argv[optind]);
+  string buffer;
+  while (!getline(f, buffer).eof()) keys.push_back(buffer);
+  for (uint32_t i = 0; i < keys.size(); ++i) string s = keys[i];
+  mph_map<string, string> table;
+
+  for (uint32_t i = 0; i < keys.size(); ++i) table[keys[i]] = keys[i];
+  mph_map<string, string>::const_iterator it = table.begin();
+  mph_map<string, string>::const_iterator end = table.end();
+  for (int i = 0; it != end; ++it, ++i) {
+    cout << i << ": " << it->first
+         << " -> " << it->second << endl;
+  }
+}
diff --git a/cxxmph/hollow_iterator.h b/cxxmph/hollow_iterator.h
new file mode 100644
index 0000000..54fba74
--- /dev/null
+++ b/cxxmph/hollow_iterator.h
@@ -0,0 +1,81 @@
+#ifndef __CXXMPH_HOLLOW_ITERATOR_H__
+#define __CXXMPH_HOLLOW_ITERATOR_H__
+
+#include <iterator>
+#include <vector>
+
+namespace cxxmph {
+
+using std::vector;
+
+template <typename container_type>
+struct is_empty {
+ public:
+  is_empty() : c_(NULL), p_(NULL) {};
+  is_empty(const container_type* c, const vector<bool>* p) : c_(c), p_(p) {};
+  bool operator()(typename container_type::const_iterator it) const {
+    if (it == c_->end()) return false;
+    return !(*p_)[it - c_->begin()];
+  }
+ private:
+  const container_type* c_;
+  const vector<bool>* p_;
+};
+
+template <typename iterator, typename is_empty>
+struct hollow_iterator_base
+    : public std::iterator<std::forward_iterator_tag,
+                           typename iterator::value_type> {
+ public:
+  typedef hollow_iterator_base<iterator, is_empty> self_type;
+  typedef self_type& self_reference;
+  typedef typename iterator::reference reference;
+  typedef typename iterator::pointer pointer;
+  inline hollow_iterator_base() : it_(), empty_() { }
+  inline hollow_iterator_base(iterator it, is_empty empty, bool solid) : it_(it), empty_(empty) {
+    if (!solid) advance();
+  }
+  // Same as above, assumes solid==true.
+  inline hollow_iterator_base(iterator it, is_empty empty) : it_(it), empty_(empty) {}
+  inline hollow_iterator_base(const self_type& rhs) { it_ = rhs.it_; empty_ = rhs.empty_; }
+  template <typename other_iterator>
+  hollow_iterator_base(const hollow_iterator_base<other_iterator, is_empty>& rhs) { it_ = rhs.it_; empty_ = rhs.empty_; }
+
+  reference operator*() { return *it_; }
+  pointer operator->() { return &(*it_); }
+  self_reference operator++() { ++it_; advance(); return *this; }
+  // self_type operator++(int) { auto tmp(*this); ++tmp; return tmp; }
+
+  template <typename other_iterator>
+  bool operator==(const hollow_iterator_base<other_iterator, is_empty>& rhs) { return rhs.it_ == it_; }
+  template <typename other_iterator>
+  bool operator!=(const hollow_iterator_base<other_iterator, is_empty>& rhs) { return rhs.it_ != it_; }
+
+  // should be friend
+  iterator it_;
+  is_empty empty_;
+
+ private:
+  void advance() {
+    while (empty_(it_)) ++it_;
+  }
+};
+
+template <typename container_type, typename iterator>
+inline auto make_solid(
+    container_type* v, const vector<bool>* p, iterator it) ->
+    hollow_iterator_base<iterator, is_empty<container_type>> {
+  return hollow_iterator_base<iterator, is_empty<container_type>>(
+      it, is_empty<container_type>(v, p));
+}
+
+template <typename container_type, typename iterator>
+inline auto make_hollow(
+    container_type* v, const vector<bool>* p, iterator it) ->
+    hollow_iterator_base<iterator, is_empty<container_type>> {
+  return hollow_iterator_base<iterator, is_empty<container_type>>(
+      it, is_empty<container_type>(v, p), false);
+}
+
+}  // namespace cxxmph
+
+#endif  // __CXXMPH_HOLLOW_ITERATOR_H__
diff --git a/cxxmph/hollow_iterator_test.cc b/cxxmph/hollow_iterator_test.cc
new file mode 100644
index 0000000..de235c0
--- /dev/null
+++ b/cxxmph/hollow_iterator_test.cc
@@ -0,0 +1,49 @@
+#include <cstdio>
+#include <cstdlib>
+#include <iostream>
+#include <vector>
+
+
+using std::cerr;
+using std::endl;
+using std::vector;
+#include "hollow_iterator.h"
+using cxxmph::hollow_iterator_base;
+using cxxmph::make_hollow;
+using cxxmph::is_empty;
+
+int main(int argc, char** argv) {
+  vector<int> v;
+  vector<bool> p;
+  for (int i = 0; i < 100; ++i) {
+    v.push_back(i);
+    p.push_back(i % 2 == 0);
+  }
+  auto begin = make_hollow(&v, &p, v.begin());
+  auto end = make_hollow(&v, &p, v.end());
+  for (auto it = begin; it != end; ++it) {
+    if (((*it) % 2) != 0) exit(-1);
+  }
+  const vector<int>* cv(&v);
+  auto cbegin(make_hollow(cv, &p, cv->begin()));
+  auto cend(make_hollow(cv, &p, cv->end()));
+  for (auto it = cbegin; it != cend; ++it) {
+    if (((*it) % 2) != 0) exit(-1);
+  }
+  const vector<bool>* cp(&p);
+  cbegin = make_hollow(cv, cp, v.begin());
+  cend = make_hollow(cv, cp, cv->end());
+
+  vector<int>::iterator vit1 = v.begin();
+  vector<int>::const_iterator vit2 = v.begin();
+  if (vit1 != vit2) exit(-1);
+  auto it1 = make_hollow(&v, &p, vit1);
+  auto it2 = make_hollow(&v, &p, vit2);
+  if (it1 != it2) exit(-1);
+
+  typedef is_empty<vector<int>> iev;
+  hollow_iterator_base<vector<int>::iterator, iev> default_constructed;
+  default_constructed = make_hollow(&v, &p, v.begin());
+  return 0;
+}
diff --git a/cxxmph/map_tester.cc b/cxxmph/map_tester.cc
new file mode 100644
index 0000000..fdae9d1
--- /dev/null
+++ b/cxxmph/map_tester.cc
@@ -0,0 +1,8 @@
+#include "map_tester.h"
+
+namespace cxxmph {
+
+MapTester::MapTester() {}
+MapTester::~MapTester() {}
+
+}
diff --git a/cxxmph/map_tester.h b/cxxmph/map_tester.h
new file mode 100644
index 0000000..7a2dd4a
--- /dev/null
+++ b/cxxmph/map_tester.h
@@ -0,0 +1,85 @@
+#ifndef __CXXMPH_MAP_TEST_HELPER_H__
+#define __CXXMPH_MAP_TEST_HELPER_H__
+
+#include
+#include
+#include
+#include
+#include
+
+namespace cxxmph {
+
+using namespace std;
+
+// template