firebase - How can I create a queue with multiple workers?


I want to create a queue that clients can put requests into and that server worker threads can pull requests from as they have resources available.

I'm exploring how to do this with Firebase as the repository, rather than using an external queue service that would have to inject data back into Firebase.

With the security and validation tools in mind, here is a simple example of what I have in mind:

  • The user pushes a request into a "queue" bucket.
  • Servers pull the request out and delete it (how do I ensure only one server gets it?).
  • The server validates the data and retrieves it from a private bucket (or injects new data).
  • The server pushes data and/or errors back to the user's bucket.

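Here is a minimal sketch of the client side of that flow in TypeScript, using the Firebase web SDK. The paths (/queue, /responses/<uid>), the user id, and the request shape are just assumptions for illustration, not a prescribed layout:

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, push, onValue } from "firebase/database";

// Hypothetical app config; only the database URL matters for this sketch.
const app = initializeApp({ databaseURL: "https://your-app.firebaseio.com" });
const db = getDatabase(app);

const uid = "user-123"; // stand-in for the authenticated user's id

// 1. Push a request into the shared queue. push() generates a unique,
//    chronologically ordered key, so concurrent clients never collide.
const request = push(ref(db, "queue"), {
  uid,
  action: "getStatus",
  createdAt: Date.now(),
});

// 2. Listen on the user's private bucket for the server's reply
//    (data and/or errors), keyed by the request id.
onValue(ref(db, `responses/${uid}/${request.key}`), (snapshot) => {
  if (snapshot.exists()) {
    console.log("server replied:", snapshot.val());
  }
});
```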

A simplified example of where this might be useful is authentication:

  • The user puts an authentication request into a public queue.
  • His login/password goes into a private bucket (a place only he can read/write into).
  • A server picks up the authentication request, retrieves the login/password, and validates it against a private bucket only the server can access.
  • The server pushes a token into the user's private bucket.

(Certainly there are still security loopholes with a public queue; I'm just exploring at this point.)
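
To make the authentication example concrete, here is a rough server-side sketch in TypeScript using the firebase-admin SDK. The paths (/credentials/<uid>, /responses/<uid>), the handleAuthRequest() helper, and the plain-text password check are purely illustrative assumptions; a real implementation would hash credentials and mint proper tokens:

```typescript
import * as admin from "firebase-admin";

admin.initializeApp({ databaseURL: "https://your-app.firebaseio.com" });
const db = admin.database();

// Handle one authentication request that a server has already pulled
// off the public queue.
async function handleAuthRequest(
  requestKey: string,
  request: { uid: string; password: string }
) {
  // Read the stored credentials from a private bucket only servers can access.
  const snapshot = await db.ref(`credentials/${request.uid}`).once("value");
  const stored = snapshot.val();

  const responseRef = db.ref(`responses/${request.uid}/${requestKey}`);

  if (stored && stored.password === request.password) {
    // Push a token into the user's private bucket.
    await responseRef.set({ token: "generated-session-token" }); // placeholder token
  } else {
    await responseRef.set({ error: "invalid credentials" });
  }
}
```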

Some other examples of usage:

  • A read-status queue (the user's status is communicated via a private bucket, and the server writes it to a public bucket that is read-only to the public).
  • A message queue (messages are sent by the user, and the server decides which discussion buckets they get dropped into).

So my questions are:

  1. Does this design integrate with the upcoming security plans? Are there alternative approaches being explored?
  2. How should the servers listen to the queue so that only one picks up each request?

Wow, great question. This is a usage pattern we've discussed internally, and we'd love to hear about your experience implementing it (support@firebase.com). Here are some thoughts on your questions:

Authentication

If your primary goal is authentication, I'd wait for our security features. :-) In particular, we're intending to have the ability to do auth backed by your own backend server, backed by a Firebase user store, or backed by 3rd-party providers (Facebook, Twitter, etc.).

Load-balanced work queue

Regardless of auth, there's still an interesting use case for using Firebase as the backbone of the sort of workload-balancing system you describe. For that, there are a couple of approaches you could take:

  1. As you describe, have a single work queue that all of the servers watch and remove items from. You can accomplish this by using transaction() to remove the items. transaction() deals with conflicts so that only one server's transaction will succeed. If one server beats a second server to a work item, the second server can abort its transaction and try again on the next item in the queue. This approach is nice because it scales automatically as you add and remove servers, but there's overhead for each transaction attempt, since it has to make a round trip to the Firebase servers to make sure nobody else has already grabbed the item from the queue. If the time it takes to process a work item is much greater than the time of a round trip to the Firebase servers, this overhead isn't a big deal. If you have lots of servers (i.e. more contention) and/or lots of small work items, the overhead may be a killer. (A sketch of this approach follows the list.)
  2. Push the load-balancing onto the clients by having them choose randomly among a number of work queues (e.g. have /queue/0, /queue/1, /queue/2, /queue/3, and have each client randomly choose one). Each server then monitors a single work queue and owns all of its processing. In general, this will have the least overhead, but it doesn't scale as seamlessly when you add/remove servers (you'll probably need to keep a separate list of work queues that servers update when they come online, and have clients monitor that list to know how many queues there are to choose from, etc.).
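
A minimal sketch of approach #1 in TypeScript, using the firebase-admin SDK: every server watches /queue and uses transaction() to claim (remove) each item; only one server's transaction commits, and the others simply move on. The /queue path and processWorkItem() are assumptions for illustration:

```typescript
import * as admin from "firebase-admin";

admin.initializeApp({ databaseURL: "https://your-app.firebaseio.com" });
const queue = admin.database().ref("queue");

queue.on("child_added", (snapshot) => {
  let claimed: unknown = null;

  snapshot.ref.transaction(
    (current) => {
      if (current === null) return; // abort: another server already took it
      claimed = current;
      return null; // writing null deletes the item, claiming it atomically
    },
    (error, committed) => {
      if (error) {
        console.error("transaction failed:", error);
      } else if (committed && claimed !== null) {
        processWorkItem(snapshot.key!, claimed); // this server won the race
      }
      // If not committed, another server grabbed the item first; move on.
    }
  );
});

// Placeholder for whatever work the item represents.
function processWorkItem(key: string, item: unknown) {
  console.log("processing", key, item);
}
```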

Personally, I'd lean toward option #2 if you want optimal performance. Option #1 might be easier for prototyping and should be fine at least initially.
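
And here is a rough TypeScript sketch of approach #2, where /queue/0 .. /queue/3, NUM_QUEUES, and the one-worker-per-queue setup are all assumptions: clients spread the load by picking a queue at random, and each server owns exactly one queue, so no transaction is needed:

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, push, onChildAdded, remove } from "firebase/database";

const app = initializeApp({ databaseURL: "https://your-app.firebaseio.com" });
const db = getDatabase(app);

const NUM_QUEUES = 4; // clients must know how many queues exist

// Client side: choose one of /queue/0 .. /queue/3 at random.
export function enqueue(request: object) {
  const queueIndex = Math.floor(Math.random() * NUM_QUEUES);
  return push(ref(db, `queue/${queueIndex}`), request);
}

// Server side: each worker process is started with its own queue index
// and is the only consumer of that queue, so there is no contention.
export function startWorker(workerIndex: number) {
  onChildAdded(ref(db, `queue/${workerIndex}`), (snapshot) => {
    console.log("processing", snapshot.key, snapshot.val());
    remove(snapshot.ref); // safe: no other server watches this queue
  });
}
```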

In general, your design is on the right track. If you experiment with an implementation and run into problems or have suggestions for our API, let us know (support@firebase.com :-)!

