As of this writing, we've indexed nearly 1M Threads and tens of thousands of blog posts, and these numbers continue to grow, which requires us to scale across our entire frontend and backend. One of the key infrastructure improvements we're working on is called LeoCache. LeoCache is a database that contains all of the Threads and Posts ever created.
LeoCache powers much of the frontend and backend of the INLEO Platform. One key feature on the UI that is powered by LeoCache is full-text search. This requires a lot of processing power and we continue to scale both the size and efficiency of this backend.
LeoCache is essentially a caching layer or database that collects, stores and allows us to retrieve key data. You could envision LeoCache as a kind of "Brain" of the entire INLEO Platform.
User settings, preferences, status, instant threads, instant votes, etc. are all governed by LeoCache. LeoCache often serves as an intermediary layer between INLEO and the Hive blockchain - enabling us to have instant actions on the frontend, but queued transactions on the backend.
The Hive Blockchain is incredibly fast, with 3-second block times. That being said, most of us don't want to wait 3 seconds for an upvote to be cast or a comment to be posted.
LeoCache lets you perform all of these actions instantly while it queues them behind the scenes. The queue then pushes the data to the Hive blockchain within its 3-second block confirmation times.
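One way to picture this instant-action-plus-queue pattern is sketched below. This is not INLEO's actual implementation; the `ActionQueue` class, the `Action` shape, and the `broadcast` callback are all hypothetical names used to illustrate the idea of applying an action to the UI immediately while batching it for the next block.

```typescript
// Hypothetical sketch of an optimistic action queue.
// Assumption: user actions update local state instantly, and a
// separate flush step (run roughly once per 3-second block) drains
// the queue and broadcasts the batch to the blockchain.

type Action = { kind: "vote" | "comment"; payload: string };

class ActionQueue {
  private pending: Action[] = [];
  public applied: Action[] = []; // actions already reflected in the UI

  // Called the moment the user acts: the UI sees the effect
  // immediately, and the action is queued for the next flush.
  submit(action: Action): void {
    this.applied.push(action); // instant frontend effect
    this.pending.push(action); // queued for the blockchain
  }

  // Called once per block interval: drain the queue and hand the
  // whole batch to a broadcast function. Returns how many actions
  // were flushed.
  flush(broadcast: (batch: Action[]) => void): number {
    const batch = this.pending;
    this.pending = [];
    if (batch.length > 0) broadcast(batch);
    return batch.length;
  }
}
```

From the user's perspective, `submit` makes the vote or comment appear instantly; `flush` is the background step that settles everything on-chain at the blockchain's own pace.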
This page is meant to describe LeoCache at a high level without diving too much into the technical nature of LeoCache. The future of LeoCache primarily lies in scaling.
As INLEO's userbase grows, we have a growing need for CPU and storage. We need to improve the efficiency of data storage and retrieval while simultaneously increasing the size of our databases.
LeoCache "runs the show" so to speak. We have a lot of features that leverage the power of LeoCache including our algorithmic feeds, LeoAI and more.