At Codegram, we've recently been working on bringing Service Oriented Architectures to our products. This comes with real benefits, like being able to use different technologies, databases, teams and architectures for each piece of the product at hand. The right toolset for the right service - so far it's worked great, and now we have an ecosystem of smaller services that move faster than a big ol' monolithic application.
But it comes with a cost: communication. Using HTTP to glue services together is widespread, and people tend to spend a lot of time and resources trying to speed up HTTP responses with different caching strategies, multi-get requests and so on. But we tend to forget a really powerful tool: parallelization. And even when we don't, threads are hard, and nobody likes wrangling with mutexes and thread-safety. For most problems, it just feels outside of our domain.
Futuroscope jumps on stage!
Futuroscope was created from the following premise: in most cases, we don't need a variable's actual value right now. What if we could run some processes in the background until we need their values?
That's called the future pattern, but we tried to give it an extra spin thanks to Ruby's duck typing - we don't care whether the parallel object is a promise or not. It walks, swims and quacks like the object we're waiting for, so we don't have to change our APIs. It just works right now, for free, in your existing libraries.
Uhmm... Sure. Can we have an example?
Ok, so here's how you could deal with a scenario where you need to make 3 searches to Twitter and sum all the result counts:
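A sketch of the sequential version - the `search_twitter` helper here is a hypothetical stand-in that simulates a ~1 second HTTP search instead of hitting the real Twitter API:

```ruby
require 'benchmark'

# Hypothetical stand-in for a real Twitter search; each call simulates
# a ~1 second HTTP round-trip and returns a result count.
def search_twitter(query)
  sleep(1)
  { query: query, results_count: 100 }
end

queries = %w[ruby rails codegram]

elapsed = Benchmark.realtime do
  # Each search waits for the previous one to finish.
  total = queries.map { |query| search_twitter(query)[:results_count] }.reduce(:+)
  puts "Total results: #{total}"
end
puts "Took #{elapsed.round(2)} seconds"  # ~3s: the searches run one after another
```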
On my computer, that takes 3.059169 seconds. Here's how you could do it with futuroscope:
That takes 1.105958 seconds on the same setup.
We could reimplement this in a more convenient way using futuroscope's map syntax:
Is that black magic?
This is actually what's happening behind the scenes:
- When you warm up your Ruby script and load futuroscope, it automatically creates a pool of threads eager to process work.
- When you create a future, it sends that block to be processed by the thread pool and immediately returns a proxy object that delegates everything to the block's return value, without blocking the main thread.
- When you call any method on that proxy, it will either wait for the result to be ready or return it immediately if it has already finished processing.
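To make the proxy idea concrete, here's a toy, stripped-down future in plain Ruby - not Futuroscope's actual implementation (it spawns one thread per future instead of using a pool), just the delegate-via-`method_missing` trick the list above describes:

```ruby
# A toy future: runs the block in its own thread and proxies every
# method call to the block's return value. Futuroscope schedules work
# on a worker pool instead, but the proxying idea is the same.
class ToyFuture < BasicObject
  def initialize(&block)
    @thread = ::Thread.new(&block)
  end

  # Block until the computation finishes, then delegate the call.
  def method_missing(name, *args, &block)
    @thread.value.send(name, *args, &block)
  end
end

double = ToyFuture.new { sleep(0.1); 21 * 2 }
puts double + 0   # blocks until the thread is done, then prints 42
puts double.to_s  # already computed: returns immediately, prints 42
```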
You can check out futuroscope on GitHub.
- Futuroscope creates a thread pool of 8 workers by default, which auto-scales up to 16 when it's given a lot of work. Both limits are configurable. This is meant to help with memory management and performance.
- You could run into deadlocks when calling futures inside of futures (because of the bounded thread pool). You shouldn't be doing that.
- Futuroscope takes its name from an awesome amusement park located near Poitiers, France, which I believe heavily shaped my childhood - not necessarily for the better. You should check it out.
- Futuroscope works great on MRI, Rubinius and JRuby, with both 1.9 and 2.0 syntax.
- On MRI, you'll only notice performance improvements if you're doing IO in between, because MRI is not able to run Ruby code in parallel (blame the Global Interpreter Lock).
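You can see that last point with plain threads, no gems required - IO-like waits overlap even on MRI, while CPU-bound Ruby code stays serialized by the GIL:

```ruby
require 'benchmark'

# IO-like work: sleep releases the GIL, so the four threads overlap
# and the total wall time stays close to a single 0.5s wait.
io_time = Benchmark.realtime do
  4.times.map { Thread.new { sleep(0.5) } }.each(&:join)
end
puts "IO-bound with 4 threads:  #{io_time.round(2)}s"  # ~0.5s on MRI

# CPU-bound Ruby code: on MRI the GIL serializes these threads,
# so you get roughly the sequential time anyway.
cpu_time = Benchmark.realtime do
  4.times.map { Thread.new { 2_000_000.times { |i| i * i } } }.each(&:join)
end
puts "CPU-bound with 4 threads: #{cpu_time.round(2)}s"
```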
We're eager to get feedback from you. Please check out futuroscope on GitHub and let us know if you find any issues or can think of ways to improve it!