
Question about the commit strategy #66

Open
decimad opened this issue Apr 19, 2018 · 5 comments

Comments

@decimad

decimad commented Apr 19, 2018

Hello, I could not find a source of information about when to place strategic commits. It appears that unqlite will not commit anything to file storage until I explicitly commit or close the database. Is there a resource describing ways to avoid a huge commit when closing the database?

@symisc
Owner

symisc commented Apr 23, 2018

Hi,

The commit strategy is implemented in the file pager.c and is largely inspired by the SQLite3 model. Basically, a disk commit occurs when:

  • The internal in-memory page cache is full. You can set a page-cache limit via unqlite_config() with the verb UNQLITE_CONFIG_MAX_PAGE_CACHE. Once this limit is reached, a disk commit may occur.
  • You manually commit the transaction via unqlite_commit().
  • You close your database handle via unqlite_close().
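The three trigger points above can be sketched with the standard unqlite C API. This is a minimal illustration, not a definitive recipe: the database filename, the cache limit of 512 pages, and the key/value contents are arbitrary assumptions.

```c
#include <stdio.h>
#include "unqlite.h"

int main(void)
{
    unqlite *pDb;
    int rc;

    /* Open (or create) the database file. */
    rc = unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE);
    if (rc != UNQLITE_OK) return 1;

    /* Hint a page-cache limit (in pages). Note: as discussed below,
     * the storage engine is not required to honor this value. */
    unqlite_config(pDb, UNQLITE_CONFIG_MAX_PAGE_CACHE, 512);

    /* Store a record; pages are dirtied in memory only at this point. */
    rc = unqlite_kv_store(pDb, "key", -1, "value", 5);
    if (rc != UNQLITE_OK) { unqlite_close(pDb); return 1; }

    /* Force a disk commit now rather than waiting for close. */
    rc = unqlite_commit(pDb);
    if (rc != UNQLITE_OK) { unqlite_close(pDb); return 1; }

    /* Closing the handle commits any remaining changes. */
    unqlite_close(pDb);
    return 0;
}
```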

@decimad
Author

decimad commented Apr 23, 2018

Hello, thank you for the reply!

I was very interested in using UNQLITE_CONFIG_MAX_PAGE_CACHE, but I realized that the passed value is completely ignored by the default driver, at least in the latest head I downloaded. Basically, I'm trying to use unqlite for the cquery project, but I'm hitting a memory wall: unqlite buffers in memory the 30 GB and more that I throw at it, no matter how often I place manual commits. I thought that config setting would do the trick, but following the code paths, it merely sets a member in a structure that is never referenced again. :(

@symisc
Owner

symisc commented Apr 24, 2018

Yes, page caching is delegated to the underlying storage engine, which is not required to honor the MAX_PAGE_CACHE instruction. However, since you have already followed the code paths, you could easily hack the storage engine: copy the unqlite_close() core code (minus the part that closes the handle) and run it whenever the page cache reaches a maximum limit.
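Short of patching the engine, one workaround at the application level is to call unqlite_commit() every N stores so that dirty pages are flushed incrementally rather than accumulating until close. Here is a sketch of that pattern; the helper name, the dummy payload, and the batch size are all assumptions for illustration, and real code would tune the batch size to its workload.

```c
#include <stdio.h>
#include "unqlite.h"

/* Store nRecords entries, committing every nBatch writes so the
 * dirty-page set stays bounded instead of growing until close.
 * Sketch only: record contents are dummy values. */
static int store_with_periodic_commits(unqlite *pDb,
                                       unsigned long nRecords,
                                       unsigned long nBatch)
{
    char zKey[32];
    unsigned long i;
    for (i = 0; i < nRecords; i++) {
        int n = snprintf(zKey, sizeof(zKey), "rec-%lu", i);
        int rc = unqlite_kv_store(pDb, zKey, n, "payload", 7);
        if (rc != UNQLITE_OK) return rc;
        if ((i + 1) % nBatch == 0) {
            rc = unqlite_commit(pDb);  /* flush dirty pages now */
            if (rc != UNQLITE_OK) return rc;
        }
    }
    return unqlite_commit(pDb);  /* flush the final partial batch */
}
```

Note that, per the comment below, this pattern only bounds memory once the leak in unqlite_commit() is fixed.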

@symisc
Owner

symisc commented Apr 27, 2018

Please update the library to the latest version (1.1.9). There was a memory leak in unqlite_commit() that caused internal data not to be freed. This should solve your issue.

@decimad
Author

decimad commented Apr 28, 2018

Thank you, I'm giving it a try!
