Rami_Ahmed
Joined: 07 Dec 2005 Posts: 37 Location: Denmark
Posted: Wed Jun 28, 2006 12:30 Post subject: NWNx2 not saving with SQLite? |
Hi!
Our server has started to crash a lot, which of course is not good, and we are looking into why this happens, but in the meantime...
Whenever our server crashes, any data saved with NWNx2 ODBC is lost.
We have the latest NWNx2 using SQLite, and the host machine runs Windows 98.
Is there any way to force NWNx2 to save everything? Our log doesn't say "NWNx2 shutting down" when the server crashes, so I guess that's why it doesn't save. Is there any fix for this?
I hope I'm understandable. |
Acrodania
Joined: 02 Jan 2005 Posts: 208
Posted: Wed Jun 28, 2006 18:05 Post subject: |
SQLite uses transactions; if the server crashes before a transaction is committed, none of the writes in it make it to disk.
You can force it to close the current transaction and then open a new one periodically, maybe every 10-20 heartbeats. That should take care of that issue. However, you really need to figure out what is crashing the system, because if it has anything to do with NWNX, the data might not be saving because of something else... |
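A rough sketch of what that could look like in a module OnHeartbeat script, assuming the standard NWNX ODBC `SQLExecDirect` from aps_include (the counter variable name is made up):

```
// OnHeartbeat sketch: every 10 heartbeats (roughly once a minute),
// commit the open transaction so its writes reach the disk, then
// immediately start a fresh one.
void main()
{
    object oMod = GetModule();
    int nCount = GetLocalInt(oMod, "HB_COMMIT_COUNT") + 1;
    if (nCount >= 10)
    {
        SQLExecDirect("COMMIT TRANSACTION");
        SQLExecDirect("BEGIN TRANSACTION");
        nCount = 0;
    }
    SetLocalInt(oMod, "HB_COMMIT_COUNT", nCount);
}
```

At worst you lose whatever was written since the last commit, instead of everything since server start.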
Rami_Ahmed
Joined: 07 Dec 2005 Posts: 37 Location: Denmark
Posted: Thu Jun 29, 2006 11:49 Post subject: |
How can I force it to close transactions? I don't know anything about SQLite or MySQL, only a bit of NWScript. The crashes are probably because the host doesn't have enough RAM, but we are fixing that ASAP. |
Papillon x-man
Joined: 28 Dec 2004 Posts: 1060 Location: Germany
Posted: Thu Jun 29, 2006 12:05 Post subject: |
From http://nwnx.org/index.php?id=doc_odbc2:
Quote: | Advanced note: After opening the database file, an implicit transaction is started automatically, since SQLite is significantly faster when access to the database is happening inside a transaction. If you want to access the database concurrently, or if you want to handle transactions yourself, issue a COMMIT right after the call to SQLInit to commit the implicit transaction.
|
So in the heartbeat script, do the following:
Code: |
SQLExecDirect("COMMIT TRANSACTION");
SQLExecDirect("BEGIN TRANSACTION");
|
This closes the currently active transaction and immediately starts a new one. You might have to experiment with the interval: make it too large and you will notice periodic hiccups (when the data gets written to disk); make it too small and you will lose performance. I'd suggest 30 seconds to begin with.
Note: I am not sure if a heartbeat script is the best solution here... I seem to remember they are skipped if the server load is high. Maybe DelayCommand(ExecuteScript()) is better in this case. YMMV. _________________ Papillon |
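One way to read that DelayCommand suggestion is a tiny script that reschedules itself, independent of the module heartbeat. An untested sketch (the script name "db_commit" is made up):

```
// db_commit.nss sketch: commit, reopen, and reschedule itself in 30 s.
// Kick it off once, e.g. from OnModuleLoad right after SQLInit():
//     DelayCommand(30.0, ExecuteScript("db_commit", GetModule()));
void main()
{
    SQLExecDirect("COMMIT TRANSACTION");
    SQLExecDirect("BEGIN TRANSACTION");
    // Re-queue this same script on the module in 30 seconds.
    DelayCommand(30.0, ExecuteScript("db_commit", OBJECT_SELF));
}
```

Unlike a heartbeat, a DelayCommand action queued on the module should not be dropped under load, though the interval can stretch if the server is struggling.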
Rami_Ahmed
Joined: 07 Dec 2005 Posts: 37 Location: Denmark
Posted: Thu Jun 29, 2006 19:44 Post subject: |
Okay. Thank you very much! We'll try this out.
And sorry about not knowing how to fix the transaction thing; I don't understand a bit of the quote you posted. But thank you! |
slappy
Joined: 16 Dec 2013 Posts: 9
Posted: Wed Dec 18, 2013 17:58 Post subject: |
Papillon wrote: | From http://nwnx.org/index.php?id=doc_odbc2:
Quote: | Advanced note: After opening the database file, an implicit transaction is started automatically, since SQLite is significantly faster when access to the database is happening inside a transaction. If you want to access the database concurrently, or if you want to handle transactions yourself, issue a COMMIT right after the call to SQLInit to commit the implicit transaction.
|
So in the heartbeat script, do the following:
Code: |
SQLExecDirect("COMMIT TRANSACTION");
SQLExecDirect("BEGIN TRANSACTION");
|
This closes the currently active transaction and immediately starts a new one. You might have to experiment with the interval: make it too large and you will notice periodic hiccups (when the data gets written to disk); make it too small and you will lose performance. I'd suggest 30 seconds to begin with.
Note: I am not sure if a heartbeat script is the best solution here... I seem to remember they are skipped if the server load is high. Maybe DelayCommand(ExecuteScript()) is better in this case. YMMV. |
Anything wrong with just firing this at the end of db scripts instead of on a heartbeat?
P.S. Thanks for the awesome NWN tools. |
Squatting Monk
Joined: 28 Jun 2007 Posts: 76
Posted: Wed Dec 18, 2013 22:15 Post subject: |
Papillon wrote: | You might have to experiment with the interval: make it too large and you will notice periodic hiccups (when the data gets written to disk); make it too small and you will lose performance. I'd suggest 30 seconds to begin with. |
slappy
Joined: 16 Dec 2013 Posts: 9
Posted: Wed Dec 18, 2013 23:51 Post subject: |
Squatting Monk wrote: | Papillon wrote: | You might have to experiment with the interval: make it too large and you will notice periodic hiccups (when the data gets written to disk); make it too small and you will lose performance. I'd suggest 30 seconds to begin with. |
|
Yeah, I saw that. Saving at the end of db scripts seems perfect: not so frequent that it affects performance, and not so infrequent that you are committing a ton of data.
So I'm wondering why this isn't mentioned as an option. Am I missing something? It seems way better than any fixed interval to my layman eyes. |
Squatting Monk
Joined: 28 Jun 2007 Posts: 76
Posted: Thu Dec 19, 2013 3:45 Post subject: |
If by "at the end of db scripts" you mean "after each query", that kind of defeats the purpose of transactions, as I understand it. The goal is to lessen the amount of processing time spent writing to the database by doing it in chunks. |
slappy
Joined: 16 Dec 2013 Posts: 9
Posted: Thu Dec 19, 2013 3:48 Post subject: |
Squatting Monk wrote: | If by "at the end of db scripts" you mean "after each query", that kind of defeats the purpose of transactions, as I understand it. The goal is to lessen the amount of processing time spent writing to the database by doing it in chunks. |
Ahh, I was figuring "well, how often will people be getting persistent variables, really?", but I guess this is where my not knowing anything at all about SQLite comes into play. |
Squatting Monk
Joined: 28 Jun 2007 Posts: 76
Posted: Thu Dec 19, 2013 5:53 Post subject: |
Okay, so basically what happens is that everything in a transaction is saved to the database at once. The more that's in a transaction, the longer it takes, but it's faster to process multiple queries in a transaction versus processing each one in its own. Processing too much in a transaction can stall the nwserver process, though, so there's room for experimentation.
For an analogy, suppose you're in the checkout line at the grocery store. It will take a long time to check out if you have a ton of stuff in your cart, but it's going to be faster than getting in line and buying each item separately. Problem is, you don't wanna have too much stuff in your cart or you risk pissing off the people behind you. So you strike a balance between efficiency and politeness. |
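In SQLite terms, the cart is the open transaction: individual writes simply accumulate in it, and only the next COMMIT pays the cost of flushing the whole batch to disk. A hedged sketch, assuming the persistence helpers from the NWNX aps_include (the wrapper name and variable names are made up):

```
// Both writes land in the currently open transaction; nothing touches
// the disk until the next COMMIT "checks out" the whole batch.
void SavePlayerStats(object oPC)
{
    SetPersistentInt(oPC, "gold",  GetGold(oPC));
    SetPersistentInt(oPC, "level", GetHitDice(oPC));
    // ...more writes from other scripts can pile into the same
    // transaction before the periodic COMMIT fires.
}
```

Committing after every such call would be like paying for each item separately; letting the periodic commit gather them is the batching Papillon's interval controls.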
slappy
Joined: 16 Dec 2013 Posts: 9
Posted: Thu Dec 19, 2013 5:58 Post subject: |
Squatting Monk wrote: | Okay, so basically what happens is that everything in a transaction is saved to the database at once. The more that's in a transaction, the longer it takes, but it's faster to process multiple queries in a transaction versus processing each one in its own. Processing too much in a transaction can stall the nwserver process, though, so there's room for experimentation.
For an analogy, suppose you're in the checkout line at the grocery store. It will take a long time to check out if you have a ton of stuff in your cart, but it's going to be faster than getting in line and buying each item separately. Problem is, you don't wanna have too much stuff in your cart or you risk pissing off the people behind you. So you strike a balance between efficiency and politeness. |
I get it, I just can't imagine people knocking out quest states fast enough for a commit after every query to be a problem. I will defer to the people with way more experience than me, though.
Edit: I'm probably just not thinking about scalability, as I run a pretty low-traffic server at the moment. My way would probably work great now, and then fail miserably if a lot of people were on at once... |
Squatting Monk
Joined: 28 Jun 2007 Posts: 76
Posted: Thu Dec 19, 2013 7:49 Post subject: |
It's not likely to matter much one way or another unless you have a high-traffic server or do a lot of database interactions. Keeping scalability in mind is good, though, especially if you don't wanna dive back into your code to fix things when your server grows. |