java - When is it appropriate to multi-thread?
i think "get" basics of multi-threading java. if i'm not mistaken, take big job , figure out how going chunk multiple (concurrent) tasks. implement tasks either runnable
s or callable
s , submit them executorservice
. (so, begin with, if mistaken on much, please start correcting me!!!)
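To make sure I'm describing the right workflow, here's a toy example of what I mean; the chunked array sum and all the names are my own invention, not from any particular library:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedSum {
    public static void main(String[] args) throws Exception {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int chunks = 4;
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        List<Future<Long>> futures = new ArrayList<>();

        int chunkSize = data.length / chunks;
        for (int c = 0; c < chunks; c++) {
            final int start = c * chunkSize;
            final int end = (c == chunks - 1) ? data.length : start + chunkSize;
            // Each Callable sums one chunk of the array.
            Callable<Long> task = () -> {
                long sum = 0;
                for (int i = start; i < end; i++) sum += data[i];
                return sum;
            };
            futures.add(pool.submit(task));
        }

        long total = 0;
        for (Future<Long> f : futures) total += f.get(); // blocks until that chunk is done
        pool.shutdown();
        System.out.println("Total: " + total);
    }
}
```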
Second, I have to imagine that the code you implement inside run() or call() should be as "parallelized" as possible, using non-blocking algorithms, etc., and that this is the hard part (writing correct parallel code). Correct? Not correct?
But the real problem I'm still having with Java concurrency (and, I guess, concurrency in general), and the true subject of this question, is:
When is it appropriate to multi-thread in the first place?
I saw an example question on Stack Overflow where the poster proposed creating multiple threads to read and process a huge text file (the book Moby Dick), and one answerer commented that multi-threading for the purpose of reading from disk was a terrible idea. The reasoning was that you'd have multiple threads introducing the overhead of context-switching on top of an already slow process (disk access).
So that got me thinking: what classes of problems are appropriate for multi-threading, and what classes of problems should be serialized? Thanks in advance!
Multi-threading has two main advantages, IMO:
- Being able to distribute CPU-intensive work across several CPUs/cores: instead of letting 3 of your 4 CPUs sit idle and doing everything on a single CPU, you split the problem into 4 parts and let each CPU work on its own part. This reduces the time it takes to execute a CPU-intensive task, and it justifies the money spent on multi-CPU hardware.
- Reducing the latency of many tasks. Suppose 4 users make requests to a web server, and the requests are handled by a single thread. Suppose the first request triggers a long database query. That thread sits idle waiting for the query to complete, and the 3 other users wait behind it just to get a tiny web page. With 4 threads, even on a single CPU, the second, third and fourth requests can be handled while the database server executes the long query, and all the users are happy. Multi-threading is therefore especially important when you have blocking IO calls, since blocking IO would otherwise leave the CPU idle instead of executing other waiting tasks (see the sketch after this list).
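A minimal sketch of the second point: slowDatabaseQuery is a made-up method that just sleeps to stand in for the blocking call. With a pool of 4 threads, the three tiny requests print their pages while request 1 is still blocked, instead of queueing behind it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RequestLatencyDemo {
    // Stand-in for the long database query from the example above.
    static String slowDatabaseQuery(int requestId) throws InterruptedException {
        Thread.sleep(2000); // this thread blocks here, but the CPU stays free for other work
        return "result for request " + requestId;
    }

    static String tinyWebPage(int requestId) {
        return "<html>page for request " + requestId + "</html>";
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Request 1 is slow; requests 2-4 are tiny pages that should not wait behind it.
        pool.submit(() -> {
            try {
                System.out.println(slowDatabaseQuery(1));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        for (int id = 2; id <= 4; id++) {
            final int requestId = id;
            pool.submit(() -> System.out.println(tinyWebPage(requestId)));
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```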
Note: the problem with reading the same disk from multiple threads, instead of having a single thread read the whole long file sequentially, is that it forces the disk to seek between different physical locations at each context switch. Since all the threads are just waiting for their disk reads to finish (they're IO-bound), this makes the reading slower than if a single thread read everything. Once the data is in memory, however, it does make sense to split the work between threads.
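One way to structure that, as a minimal sketch assuming a local file named mobydick.txt (my own placeholder): read the file sequentially on one thread, then let multiple threads do the CPU-bound word counting, here via a parallel stream (which runs on the common ForkJoinPool).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MobyDickWordCount {
    public static void main(String[] args) throws IOException {
        // 1. Read the whole file sequentially on one thread, so the disk access stays sequential.
        Path file = Paths.get("mobydick.txt"); // hypothetical path
        List<String> lines = Files.readAllLines(file);

        // 2. Once the data is in memory, split the CPU work (word counting) across threads.
        Map<String, Long> counts = new ConcurrentHashMap<>();
        lines.parallelStream()
             .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+")))
             .filter(word -> !word.isEmpty())
             .forEach(word -> counts.merge(word, 1L, Long::sum));

        System.out.println("whale appears " + counts.getOrDefault("whale", 0L) + " times");
    }
}
```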