Lokey
Joined: 02 Jan 2005 Posts: 158
Posted: Thu Jan 06, 2005 2:58
First, ditto on monezz's post; I'm using that method as well.
I also agree with JeroenB above that the difference in execution time between local variables and the NWNx database is probably minimal.

Timendainum wrote:
"Lokey, to expand on your question: could there be a way to benchmark database queries? We can use the profiler to benchmark script running times, but it would be interesting to have a DB benchmark as well."

Start here? http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
There's a built-in BENCHMARK() function in MySQL as well, but it's not that useful here, since you have ODBC, NWNx and NWN to negotiate (plus NWN chewing up memory, lol). We can load in hundreds of individual data sets, or blobs that might be thousands of characters long, without a noticeable in-game hitch... but we're working toward using hashsets rather than locals on the module or object in game at this point.
Have you tried the NWNx Profiler, by the way? You could log timestamps at script begin/end, but the game clock still does some funky things; I'm not sure if it'll work at all, or whether it'll go haywire when NWN is working hard and starts deciding not to execute some processes.
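The kind of DB benchmark Timendainum asks about can be driven from the application side: issue the same query many times and average the wall-clock round-trip time, so any driver overhead is included in the measurement. A minimal sketch in Python, using an in-memory SQLite database as a stand-in for the ODBC/NWNx/MySQL stack discussed here (the `pwdata` table layout is an assumption modeled on the usual persistent-variable schema):

```python
import sqlite3
import time

def bench(conn, sql, params=(), runs=1000):
    """Time `runs` executions of a query; return mean seconds per call."""
    start = time.perf_counter()
    for _ in range(runs):
        conn.execute(sql, params).fetchall()
    return (time.perf_counter() - start) / runs

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pwdata (player TEXT, name TEXT, val TEXT, "
             "PRIMARY KEY (player, name))")
conn.execute("INSERT INTO pwdata VALUES ('Lokey', 'gold', '1234')")

per_call = bench(conn, "SELECT val FROM pwdata WHERE player=? AND name=?",
                 ("Lokey", "gold"))
print(f"{per_call * 1e6:.1f} microseconds per query")
```

Against a real networked MySQL server the absolute numbers would of course be far higher than in-process SQLite, which is exactly what such a harness would measure.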
Undertowe
Joined: 03 Jan 2005 Posts: 33
Posted: Thu Jan 06, 2005 5:01
Monezz, the approach you described sounds essentially the same as mine, except that I hadn't decided on a way to determine which variables to read from the PC object and store in the DB during a "save point."
As for your solution to that problem (consulting what is currently stored in the DB to determine what to look for on the PC): I agree that this is one way to do it, but it sure does seem like an awful lot of reading from the DB. If we take into account that you would already be reading those values from the DB OnClientEnter, then again during each "save point," and also OnClientLeave, the increased efficiency of such an approach becomes more and more dubious, at least compared to "real-time" reading/writing to the DB.
Well, on one hand we have Monezz and Lokey taking the approach of "caching" variables on the PC object (I knew you guys were out there somewhere!). On the other hand, we have JeroenB insisting that we are better off bypassing that step completely, because of increased overhead and such.
Wow, I'm still not sure what the most efficient approach is. Is there some quantitative data out there somewhere?
Lokey, I'm curious: why are you using the approach that Monezz described if you agree with JeroenB that the difference in execution time between accessing local and DB data is minimal? Did you start off thinking one way and then change your mind?
Monezz, you say that for a given set of records, deleting them all and inserting them anew is "easier, cleaner and faster" than updating them. This seems very counter-intuitive to me. Could you please elaborate?
monezz
Joined: 03 Jan 2005 Posts: 52 Location: The Netherlands
Posted: Thu Jan 06, 2005 11:48
insert/delete vs. updates:

easier:
You don't have to worry about the existing values in the database; only the values stored on the character end up in the database. So it's very easy to control your persistent parameters through the local variables.

cleaner:
When you decide you don't need a parameter for a character anymore, it gets deleted from the database, so you always have only accurate parameters in your DB. (I had some experience with MyISAM tables creating overhead on update statements, but I haven't done any research on it.)

faster:
Update statements require more extensive locking. I'm not sure how this affects MyISAM tables, since tables of that type can only be locked at the table level. I have done large experiments and benchmarks with updates vs. delete/inserts on Oracle, which uses more or less the same locking approach as the InnoDB table type in MySQL. The delete/insert approach was in some situations more than 10 times faster!
Monezz.
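monezz's delete-then-insert save can be sketched in a few lines. This is a hypothetical illustration in Python with an in-memory SQLite database standing in for MySQL (the `pwdata` table and column names are assumptions, not the poster's actual schema); the point is that stale rows vanish automatically, because only the locals currently cached on the character are re-inserted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pwdata (player TEXT, name TEXT, val TEXT)")

def save_character(conn, player, local_vars):
    """Replace all persisted rows for a character with the cached local
    variables; rows for variables no longer on the PC simply vanish."""
    with conn:  # one transaction: the delete and inserts commit atomically
        conn.execute("DELETE FROM pwdata WHERE player = ?", (player,))
        conn.executemany(
            "INSERT INTO pwdata (player, name, val) VALUES (?, ?, ?)",
            [(player, k, v) for k, v in local_vars.items()])

save_character(conn, "Lokey", {"gold": "1200", "quest_stage": "3"})
save_character(conn, "Lokey", {"gold": "900"})  # quest_stage row is gone
rows = conn.execute(
    "SELECT name, val FROM pwdata WHERE player='Lokey'").fetchall()
print(rows)  # only the variables currently on the character remain
```

Wrapping the delete and the inserts in a single transaction also addresses the crash-consistency concern raised earlier in the thread: either the whole new snapshot lands, or the old one stays.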
monezz
Joined: 03 Jan 2005 Posts: 52 Location: The Netherlands
Posted: Thu Jan 06, 2005 11:51
Lokey wrote:
"You could log timestamps at script begin/end."

Unfortunately, timestamps won't change during script execution.
Papillon x-man
Joined: 28 Dec 2004 Posts: 1060 Location: Germany
Posted: Thu Jan 06, 2005 12:44
If we released a little module that allows you to accurately time things, would someone care to enlighten us with some data on the various approaches discussed here?
In Avlis, I used approach 3. There's an include file, aps_cache, that transparently caches reads and writes; it's basically a write-through cache. I have a feeling that this approach is the most balanced one, without actually being able to rely on any valid performance data. Just a feeling.
_________________
Papillon
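For readers unfamiliar with the pattern Papillon describes: in a write-through cache, reads are answered from memory once a value has been seen, while every write updates both the cached copy and the backing database immediately, so a crash never loses more than the operation in flight. A minimal sketch (Python with SQLite as a stand-in; aps_cache itself is NWScript, and the `kv` table here is hypothetical):

```python
import sqlite3

class WriteThroughCache:
    """Sketch of a write-through cache in the spirit of aps_cache:
    reads are served from memory after the first miss; every write
    updates both the in-memory copy and the database immediately."""

    def __init__(self, conn):
        self.conn = conn
        self.local = {}

    def get(self, key):
        if key not in self.local:                  # cache miss: one DB read
            row = self.conn.execute(
                "SELECT val FROM kv WHERE name = ?", (key,)).fetchone()
            self.local[key] = row[0] if row else None
        return self.local[key]

    def set(self, key, val):
        self.local[key] = val                      # write to the cache...
        with self.conn:                            # ...and through to the DB
            self.conn.execute(
                "INSERT OR REPLACE INTO kv (name, val) VALUES (?, ?)",
                (key, val))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (name TEXT PRIMARY KEY, val TEXT)")
cache = WriteThroughCache(conn)
cache.set("gold", "500")
print(cache.get("gold"))       # served from memory, no second DB read
```

The "balanced" quality Papillon mentions follows from the structure: repeated reads cost nothing after the first, while the DB copy is never stale.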
monezz
Joined: 03 Jan 2005 Posts: 52 Location: The Netherlands
Posted: Thu Jan 06, 2005 13:20
With a timing module, I could set up a test environment.
I have an NWN server and a separate MySQL server available.
Lokey
Joined: 02 Jan 2005 Posts: 158
Posted: Fri Jan 07, 2005 4:40
Undertowe wrote:
"Well, on one hand we have Monezz and Lokey taking the approach of "caching" variables on the PC object (I knew you guys were out there somewhere!). On the other hand, we have JeroenB insisting that we are better off bypassing that step completely, because of increased overhead and such.
Wow, I'm still not sure what the most efficient approach is. Is there some quantitative data out there somewhere?
Lokey, I'm curious: why are you using the approach that Monezz described if you agree with JeroenB that the difference in execution time between accessing local and DB data is minimal? Did you start off thinking one way and then change your mind?"

Sorry, I've been on a UT kick lately, so I'm not entirely up to speed on things NWN at the moment. Anyway, my first response was more about sharing the rigors we've put NWNx/MySQL through (I'm a hero story to some extent), and the second was about most external DB apps being efficient/robust enough that you needn't worry too much about stressing them.
Maybe I didn't exercise enough reading skill on Monezz's post at the top of the page, by reading too much into the first line: "To avoid problems and inconsistencies with crashes, I always store the character and the persistent data at the same time."
Anyway, we use a combination of options 3 and 5 on the various Neversummer incarnations: data that is called often is stored locally (whether in an NWNx hashset or NWN locals) and is updated both locally and in the DB immediately whenever it changes. I don't have much quantitative data for you, though, just plenty of qualitative.
In general, I don't think there's much to worry about efficiency-wise, but it will depend on what you're trying to do (what size mod(s) you're running, for how many people, on what kind of hardware/software and bandwidth). More important is setting up a system you're comfortable using.
_________________
Neversummer PW NWNx powered mayhem
Papillon x-man
Joined: 28 Dec 2004 Posts: 1060 Location: Germany
Posted: Thu Jan 13, 2005 0:29
I have put up a timer plugin on the download page, so test away!
_________________
Papillon
Asmodae
Joined: 07 Jan 2005 Posts: 55
Posted: Fri Jan 14, 2005 9:40
With performance considerations in mind, recall that MySQL is a disk-based system, so reads/writes will never be faster than your HDD's seek/access/read times. MySQL does a world of wonders to optimize that, but keep it in mind. Operations in RAM (i.e., local vars and such) will be faster as a matter of course, as long as you keep things within reason. I did a test with several thousand local variables not long ago, looking for memory leaks, and found that anything past 2000 made a visually NOTICEABLE performance difference when accessing a local variable on an object. When I say noticeable, I mean watching output messages. So the practical limit you want for performance is FAR lower; I would say make sure you have fewer than 100 on any given object as an absolute maximum.
A balance must be struck. Optimized and normalized tables are a great way to tune up your database: combine your queries into one call rather than several, and make use of indexes and combined primary keys for maximum efficiency. So rather than doing a dozen reads/writes of various database variables in separate calls, it is better to combine them into one SQL statement so that MySQL can use its internal optimizations.
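The advice above about combining calls can be shown concretely: instead of one query per variable, fetch every variable for the PC in a single round trip and unpack it client-side. A hypothetical sketch (Python with SQLite standing in for MySQL; the `pwdata` table and variable names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pwdata (player TEXT, name TEXT, val TEXT, "
             "PRIMARY KEY (player, name))")
conn.executemany("INSERT INTO pwdata VALUES (?, ?, ?)",
                 [("Lokey", "gold", "900"),
                  ("Lokey", "quest_stage", "3"),
                  ("Lokey", "deaths", "7")])

# One round trip instead of three: fetch every variable for the PC at
# once and unpack client-side.  With a composite primary key on
# (player, name), this WHERE clause is satisfied by an index scan.
rows = conn.execute(
    "SELECT name, val FROM pwdata WHERE player = ?", ("Lokey",)).fetchall()
pc_vars = dict(rows)
print(pc_vars["gold"], pc_vars["deaths"])
```

On a networked MySQL server the saving is mostly in round-trip latency: three queries cost three network round trips, one query costs one.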
Also, some of the cases listed here involve many-to-many relationships, which can be handled a number of ways, but the best way is a linker table. For example: in Exodus we use custom quests, so there's a table for those, and a table to store common information about characters. Since one character can do many quests, and one quest can be done by many characters, there's a linker table in the middle whose primary key is the combination of the CharacterID and the QuestID. Each data pair is a unique table entry, and since it's a combined primary key, it is highly optimized for searches, joins, etc.
After a lot of research, I found that's the best way to do the kind of data relation you describe inside MySQL.
_________________
Nepenthe - An NWN2 Persistant World, coming to a planet near you. http://www.nepentheonline.com
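The linker-table layout Asmodae describes looks roughly like this. The table and column names below are invented for illustration (the actual Exodus schema isn't shown in the thread), and SQLite stands in for MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE characters (charID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE quests     (questID INTEGER PRIMARY KEY, title TEXT);
-- the linker table: one row per (character, quest) pair, with a
-- composite primary key so lookups and joins from either side can
-- use the index
CREATE TABLE char_quests (
    charID  INTEGER NOT NULL REFERENCES characters(charID),
    questID INTEGER NOT NULL REFERENCES quests(questID),
    stage   INTEGER DEFAULT 0,
    PRIMARY KEY (charID, questID)
);
""")
conn.execute("INSERT INTO characters VALUES (1, 'Undertowe')")
conn.execute("INSERT INTO quests VALUES (10, 'The Lost Mines')")
conn.execute("INSERT INTO char_quests (charID, questID, stage) "
             "VALUES (1, 10, 2)")

# "Which quests is this character on, and how far along?"
row = conn.execute("""
    SELECT q.title, cq.stage
    FROM char_quests cq JOIN quests q ON q.questID = cq.questID
    WHERE cq.charID = ?""", (1,)).fetchone()
print(row)  # ('The Lost Mines', 2)
```

The composite primary key doubles as the uniqueness guarantee: a character cannot accidentally acquire the same quest row twice.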
Papillon x-man
Joined: 28 Dec 2004 Posts: 1060 Location: Germany
Posted: Fri Jan 14, 2005 14:07
By the way, did anybody test MySQL vs. ODBC connection performance? I suspect switching from ODBC to MySQL might not be the big improvement some people have been preaching all this time... at least my scorco tests showed virtually no difference when dealing with objects.
_________________
Papillon
weldieran
Joined: 05 Aug 2005 Posts: 71
Posted: Fri Aug 05, 2005 21:50
How is it that you would loop through and save to the DB OnClientExit?
It was my understanding that very little is available OnClientExit (GetName, mainly).
Are variables stored on the exiting PC still accessible? But inventory is not, right? So variables stored on inventory items are not accessible?
Alosynth
Joined: 06 Jan 2005 Posts: 24
Posted: Sun Aug 07, 2005 21:06
The PC object still exists after the player leaves. However, the GetPlayer* functions don't work to retrieve player information from the object, because the player is detached from the object.
There's some more detailed information on what happens with OnClientLeave here.