Here's to testing how a stream with the same title interacts with an existing stream.
Is this how one "shares" a stream?
flaper87 and kgriffs are so in sync. That's how we reach consensus so quickly on new ideas - they actually communicate telepathically!
In other news, the discussions around how to implement sharding are starting to come together. We've been putting the pieces together, and as they've come to light, it's been getting easier to see what works and what doesn't.
Next up: storage controllers to manage shard registration and mapping of queues to shards!
Sharding news:
Patches are starting to get merged on the admin API side of things. We now have a clear distinction between control plane and data plane storage drivers. This distinction made the abstractions easier to work with further down the line.
There's also a more encompassing notion of an admin API instance. An admin instance contains all the routes/resources/powers that a public API instance has, plus access to control plane features. In the case of sharding, an admin instance allows an operator to register shards and investigate the state of the catalogue that maps queues to shards.
Next up: finish getting the admin API for sharding merged in, get the catalogue portion reviewed, and then put it all together!
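As a rough mental model of what the control plane manages, here's a hypothetical sketch - names like `register_shard` and `lookup` are illustrative, not Marconi's actual driver interface. It boils down to a shard registry plus a catalogue mapping queues to shards:

```python
class Catalogue:
    """Toy model of the control plane: shards plus a queue->shard map."""

    def __init__(self):
        self.shards = {}   # shard name -> connection URI
        self.mapping = {}  # (project, queue) -> shard name

    def register_shard(self, name, uri):
        # An operator registers a storage shard via the admin API.
        self.shards[name] = uri

    def assign(self, project, queue, shard):
        # Map a queue onto one of the registered shards.
        if shard not in self.shards:
            raise KeyError('unknown shard: %s' % shard)
        self.mapping[(project, queue)] = shard

    def lookup(self, project, queue):
        # The data plane resolves which shard holds a given queue.
        return self.shards[self.mapping[(project, queue)]]


catalogue = Catalogue()
catalogue.register_shard('mongo-1', 'mongodb://db1:27017')
catalogue.assign('my-project', 'orders', 'mongo-1')
print(catalogue.lookup('my-project', 'orders'))  # mongodb://db1:27017
```

The real catalogue lives in control plane storage, of course - the point is just that every data plane request resolves through a mapping like this one.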
Marconi sharding progresses a little bit more. Development is proceeding at a frantic pace!
Latest today: catalogue storage driver was merged to main line.
The journey towards sharding capability in Marconi is an epic one. There's been lots of help from everyone towards deploying it, testing it, and making sure that the reference implementation is solid.
It's going to be awesome when this feature is ready. Only a bit more to go, and it's likely to be ready before the OpenStack Summit!
Marconi sharding is getting very close:
Next up: caching, perf testing, functional testing
Marconi Redis is one step closer to being a proper backend for Marconi:
```
alejandro@rainbow-generator:~/Development/marconi-redis:[master %>]$ tox -e pep8
GLOB sdist-make: /home/alejandro/Development/marconi-redis/setup.py
pep8 inst-nodeps: /home/alejandro/Development/marconi-redis/.tox/dist/marconi-redis-backend-0.8.0.a37.gb18597a.zip
pep8 runtests: commands[0] | flake8
pep8: commands succeeded
congratulations :)
```
```
RedisCatalogueTests
    test_catalogue_entry_life_cycle                       ERROR  0.02
    test_exists                                           ERROR  0.01
    test_get                                              ERROR  0.01
    test_get_raises_if_does_not_exist                     ERROR  0.01
    test_list                                             ERROR  0.01
    test_update                                           ERROR  0.01
    test_update_raises_when_entry_does_not_exist          ERROR  0.01
RedisClaimTests
    test_claim_lifecycle                                  ERROR  0.02
    test_do_not_extend_lifetime                           ERROR  0.03
    test_expired_claim                                    ERROR  0.01
    test_extend_lifetime                                  ERROR  0.03
    test_extend_lifetime_with_grace_1                     ERROR  0.03
    test_extend_lifetime_with_grace_2                     ERROR  0.03
    test_illformed_id                                     ERROR  0.01
RedisDriverTest
    test_control_db_instance                              OK     0.00
    test_data_db_instance                                 OK     0.00
RedisMessageTests
    test_bad_claim_id                                     OK     0.01
    test_bad_id                                           OK     0.01
    test_bad_marker                                       OK     0.01
    test_claim_effects                                    FAIL   0.03
    test_expired_messages                                 FAIL   0.01
    test_get_multi                                        FAIL   0.03
    test_message_lifecycle                                OK     0.01
    test_multi_ids                                        OK     0.01
RedisQueueTests
    test_list_None                                        OK     0.02
    test_list_project                                     OK     0.02
    test_queue_lifecycle                                  ERROR  1.23
    test_stats_for_empty_queue                            OK     0.01
RedisShardsTests
    test_create_replaces_on_duplicate_insert              ERROR  0.00
    test_create_succeeds                                  ERROR  0.00
    test_delete_nonexistent_is_silent                     ERROR  0.00
    test_delete_works                                     ERROR  0.00
    test_detailed_get_returns_expected_content            ERROR  0.00
    test_drop_all_leads_to_empty_listing                  ERROR  0.00
    test_exists                                           ERROR  0.00
    test_get_raises_if_not_found                          ERROR  0.00
    test_get_returns_expected_content                     ERROR  0.00
    test_listing_simple                                   ERROR  0.00
    test_update_raises_assertion_error_on_bad_fields      ERROR  0.00
    test_update_works                                     ERROR  0.00

Ran 40 tests in 1.679s

FAILED (errors=27, failures=3)
```
In short:
Support for both FIFO and non-FIFO Redis is baked in. All it takes is flipping one configuration option and it just works:
```conf
[queues:storage:driver:redis]
fifo = True
```
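To illustrate what a flag like that might control internally - a speculative sketch, not the real driver's message-ID scheme - a FIFO driver can hand out strictly increasing markers, while a non-FIFO driver can use independent IDs that never contend with each other:

```python
import itertools
import uuid


def make_id_factory(fifo):
    """Return a message-ID generator.

    fifo=True  -> monotonically increasing markers (ordering preserved,
                  but concurrent posts contend for the next marker)
    fifo=False -> independent UUIDs (no ordering guarantee, no contention)
    """
    if fifo:
        counter = itertools.count(1)
        return lambda: next(counter)
    return lambda: uuid.uuid4().hex


next_id = make_id_factory(fifo=True)
print(next_id(), next_id(), next_id())  # 1 2 3
```

The trade-off is the whole point of the option: the non-FIFO path scales out trivially because writers never have to agree on a sequence.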
More updates coming tomorrow.
More progress on Marconi Redis:
```
alejandro@rainbow-generator:~/Development/marconi-redis:[master %>]$ MARCONI_TEST_REDIS=1 tox -e py27 -- tests.unit.queues.storage.test_impl_redis:RedisShardsTests
GLOB sdist-make: /home/alejandro/Development/marconi-redis/setup.py
py27 inst-nodeps: /home/alejandro/Development/marconi-redis/.tox/dist/marconi-redis-backend-0.8.0.a37.gb18597a.zip
py27 runtests: commands[0] | nosetests tests.unit.queues.storage.test_impl_redis:RedisShardsTests

RedisShardsTests
    test_create_replaces_on_duplicate_insert              OK  0.10
    test_create_succeeds                                  OK  0.02
    test_delete_nonexistent_is_silent                     OK  0.02
    test_delete_works                                     OK  0.02
    test_detailed_get_returns_expected_content            OK  0.01
    test_drop_all_leads_to_empty_listing                  OK  0.01
    test_exists                                           OK  0.01
    test_get_raises_if_not_found                          OK  0.01
    test_get_returns_expected_content                     OK  0.01
    test_listing_simple                                   OK  0.02
    test_update_raises_assertion_error_on_bad_fields      OK  0.01
    test_update_works                                     OK  0.01
```
Shard storage tests now pass!
It took some finagling, including implementing a list find command in Python. Since the Marconi API expects paginated results in alphabetical order (rather than lexicographical), Redis LIST commands had to be used rather than SORTED SETS.
Correction: the Marconi API expects lexicographical sorting.
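That "list find" can be sketched roughly like this - a pure-Python stand-in, where the real driver would issue LRANGE/LINDEX calls against a Redis LIST it keeps sorted on insert; `list_find` and the page size are illustrative names, not actual driver code:

```python
import bisect


def list_find(sorted_names, marker, limit=10):
    """Return up to `limit` names that sort after `marker`.

    `sorted_names` stands in for a Redis LIST the driver keeps in
    lexicographic order; bisect stands in for the seek a real driver
    would do with LRANGE/LINDEX. This mirrors marker-based pagination.
    """
    start = bisect.bisect_right(sorted_names, marker)
    return sorted_names[start:start + limit]


queues = ['alerts', 'builds', 'events', 'logs']
print(list_find(queues, 'builds'))  # ['events', 'logs']
```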
Marconi Redis now supports catalogue storage:
```
alejandro@rainbow-generator:~/Development/marconi-redis:[master %=]$ MARCONI_TEST_REDIS=1 tox -e py27 -- tests.unit.queues.storage.test_impl_redis:RedisCatalogueTests
GLOB sdist-make: /home/alejandro/Development/marconi-redis/setup.py
py27 inst-nodeps: /home/alejandro/Development/marconi-redis/.tox/dist/marconi-redis-backend-0.9.0.zip
py27 runtests: commands[0] | nosetests tests.unit.queues.storage.test_impl_redis:RedisCatalogueTests

RedisCatalogueTests
    test_catalogue_entry_life_cycle                       OK  0.11
    test_exists                                           OK  0.01
    test_get                                              OK  0.01
    test_get_raises_if_does_not_exist                     OK  0.01
    test_list                                             OK  0.02
    test_update                                           OK  0.01
    test_update_raises_when_entry_does_not_exist          OK  0.01

Slowest 1 tests took 0.11 secs:
    0.11 RedisCatalogueTests.test_catalogue_entry_life_cycle

Ran 7 tests in 0.191s

OK
```
Marconi Redis still needs more work. Queues, shards, and catalogue are working. Messages are mostly working. Claims are not working at all.
Marconi sharding is officially merged in! It's now possible to set up multiple storage nodes and partition queues amongst them. This makes it possible to scale Marconi quite a bit.
Thanks goes to everyone on the Marconi team and much of the team at the Atlanta Rackspace office for helping make this happen!
Marconi is growing - and could use help.
Before too much longer, we're going to be branching a notifications project off of the Marconi code base. That is to say, it'll grow along with queues, and might even be launched with a unified API, but I expect it'll become its own project some day.
Then there's the need to support more storage back ends. Currently, we support MongoDB and SQLite. There's work started on sqlalchemy (awesome!), and you've probably already heard about my own efforts at Redis support.
How about those transports? We only support WSGI/HTTP at the moment, implemented on top of the lean and lovely Falcon framework. We also have an extremely experimental WebSockets implementation available, contributed by flaper87. I'm hoping to see a ZeroMQ transport in the future. Then there's the upcoming nanomsg transport as well. It's an exciting area, and I hope you'll join in and share your thoughts.
Features: there are many more planned. Better operational stats, queue quotas, queue flavors, Heat integration, Horizon integration, Tempest integration - just check out our blueprints page!
What's next on my plate? I hope to wrap up Marconi Redis, at least in beta form.
Architecturally, the biggest scalability bottleneck in Marconi's design at the moment is the dependency on FIFO semantics for queuing operations.
Each queue maps to a particular shard. This sharding design helped Marconi overcome its first scaling bottleneck, which was being able to handle many messages to multiple queues. Now, since queues can live on different storage nodes, it's possible to scale out a Marconi deployment.
However, there's still FIFO semantics to contend with. The FIFO invariant is enforced by the storage layer, which takes advantage of atomic commit semantics when posting messages to assign each message a marker that increases monotonically within its queue. This is done so that messages are claimable in the order that they arrived. If two messages are posted concurrently, one of those POST operations "fails" and is retried shortly after - internally, so the connecting client never sees it.
Fortunately, storage driver implementations for Marconi need not honor the FIFO property. If a particular workload does not require FIFO, then it becomes much easier to scale out such a deployment.
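Here's a toy illustration of that post-with-retry loop. The shapes are assumed for the sketch: `_try_post` stands in for whatever atomic commit a real storage driver performs, and there's no actual concurrency here, just the compare-and-set structure:

```python
class FifoQueue:
    """Toy queue enforcing a per-queue, strictly increasing marker."""

    def __init__(self):
        self.last_marker = 0
        self.messages = []

    def _try_post(self, body, marker):
        # Stand-in for an atomic commit: succeeds only if nobody
        # else took this marker since the caller read last_marker.
        if marker != self.last_marker + 1:
            return False
        self.last_marker = marker
        self.messages.append((marker, body))
        return True

    def post(self, body):
        # Internal retry loop: a concurrent "failure" just means we
        # re-read the marker and try again, invisible to the client.
        while True:
            marker = self.last_marker + 1
            if self._try_post(body, marker):
                return marker


q = FifoQueue()
print([q.post(b) for b in ('a', 'b', 'c')])  # [1, 2, 3]
```

The contention on `last_marker` is exactly why FIFO is the bottleneck: every concurrent writer to a queue is racing for the same next marker.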
I can envision a future feature involving queue flavors where users submitting requests to Marconi can annotate their queue at creation time with attributes they care about. For example:
```
PUT /v1/queues/fifo_queue

{
    "fifo": true
}

PUT /v1/queues/fast_queue

{
    "persist": false,
    "fifo": false
}

PUT /v1/queues/make_it_last

{
    "persist": true,
    "fifo": false
}
```
I've identified the flavors persist and fifo so far for choosing storage shards automatically. An example of a persistent storage flavor that offers FIFO is Marconi's reference mongodb implementation, which can be deployed with replica sets for added reliability. An example of a storage driver that's the polar opposite is my marconi-redis work-in-progress: since the data is maintained in memory, it can be lost at any time.
It's all about configuration and letting the user choose what they need. I hope to see more of this in the future of Marconi.