> On Wed, 10 Apr 1996, Hades wrote:
>
> Oh I see what you've done... okay, you beat me on speed there... a
> hash table is an array of linked lists. Say I want to index room 1031 and
> the size of my table is 727 (a prime far away from a power of 2 is a good
> choice for the size), then I call a function that looks at
> world_hash_table[1031 % 727], and if this isn't the room I'm looking for
> then it looks at world_hash_table[1031 % 727]->next_hashed, and it
> continues until the function finds the right room or NULL is returned.
> The idea is that there wouldn't be more than 2 or 3 steps on average to
> find the room, which is much better than averaging 2000 steps as in a
> linear search.
>
> The only thing I disagree with in your system is that if you get to the
> point where you start having huge numbers for your vnumbers, your array
> size will be really big. I personally prefer the hash table so I at
> least have control over that.

Well I admit, yeah, it's huge. Why? Because at startup I initialize the
array for 32768 records, the max # of rooms. But it's an array of
pointers, so the memory it uses isn't overly huge: about 128k, if I'm
right that a pointer to a structure is 4 bytes of data, but I might be
wrong there. I figure 128k of memory is nothing compared to the speed
and CPU time it would save me.

That hash-table idea is cool too, I'm gonna look into it.

Hades
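
For anyone following along, here is a minimal sketch in C of the chained
lookup described in the quote above. Only world_hash_table and next_hashed
come from the post; the struct room_data layout, the table-size constant,
and the function names are illustrative assumptions, not code from any
particular MUD base.

#include <stdio.h>
#include <stdlib.h>

#define HASH_SIZE 727   /* prime, away from powers of 2 (as in the quote) */

struct room_data {
    int vnum;                       /* virtual room number, e.g. 1031     */
    struct room_data *next_hashed;  /* next room in this bucket's chain   */
    /* ... rest of the room data (assumed) ... */
};

static struct room_data *world_hash_table[HASH_SIZE];

/* Insert a room at the head of its bucket's chain. */
void hash_insert(struct room_data *room)
{
    int bucket = room->vnum % HASH_SIZE;

    room->next_hashed = world_hash_table[bucket];
    world_hash_table[bucket] = room;
}

/* Walk the chain for vnum's bucket; return the room or NULL. */
struct room_data *find_room(int vnum)
{
    struct room_data *r;

    for (r = world_hash_table[vnum % HASH_SIZE]; r != NULL; r = r->next_hashed)
        if (r->vnum == vnum)
            return r;

    return NULL;
}

int main(void)
{
    struct room_data *room = calloc(1, sizeof(*room));

    room->vnum = 1031;              /* 1031 % 727 == bucket 304 */
    hash_insert(room);

    printf("found room %d\n", find_room(1031)->vnum);
    return 0;
}

Hades' alternative amounts to a direct table, something like
struct room_data *world[32768]; indexed by vnum, which trades roughly 128k
of pointer storage for a single array access per lookup, while the chained
table keeps the array small at the cost of a short walk down the chain.
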