java - Simple web server when high concurrency is met
This is not homework; it's an interview question I found on the web.
The Java code is:
    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SimpleWebServer {
        public static void handleRequest(Socket c) {
            // process request
        }

        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(80);
            while (true) {
                final Socket connection = server.accept();
                Runnable task = new Runnable() {
                    @Override
                    public void run() {
                        handleRequest(connection);
                    }
                };
                new Thread(task).start();
            }
        }
    }
The question: what are the potential issues when there is high concurrency? My analysis:
- It doesn't use the synchronized keyword, so there might be situations where a race condition happens.
- It should use a thread pool, which would be more efficient.
- It seems that for each incoming request, the class creates a new ServerSocket, which would consume a lot of space when high concurrency happens?
The main problem I see is the one you've identified. The thread-per-request model is inherently flawed (as evidenced by the widescale adoption of nginx and lighttpd over Apache). Moving to an ExecutorService (probably one backed by a thread pool) is the right choice here.
By changing thread-per-request to a simple submission of a task to an ExecutorService, you are moving the application towards an event-based model. There's a lot of material out on the web preaching the scalability virtues of event-based over thread-based models.
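For concreteness, here is a minimal sketch of that change, assuming a fixed-size pool (the pool size of 100 is an arbitrary illustrative value, not a recommendation):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledWebServer {
        public static void handleRequest(Socket c) {
            // process request
        }

        public static void main(String[] args) throws IOException {
            // A fixed-size pool caps the number of concurrent handler
            // threads instead of spawning one unbounded thread per connection.
            ExecutorService pool = Executors.newFixedThreadPool(100);
            ServerSocket server = new ServerSocket(80);
            while (true) {
                final Socket connection = server.accept();
                // Submit the work as a task; an idle pool thread picks it up.
                pool.submit(new Runnable() {
                    @Override
                    public void run() {
                        handleRequest(connection);
                    }
                });
            }
        }
    }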
Using 'synchronized' on the handleRequest method would be a pretty brute-force tactic, and depending on the particular guts of that method, a more granular locking strategy (or lock-free logic) may be preferred. The ServerSocket creation you mention happens only once per application, so that's not a scalability problem. The accept method does create a new Socket instance for each connection, but these are cheap. Looking at the JDK 6 source, it consists of allocating 7 booleans, a lock Object, and checking internal state - i.e., it's not going to be a problem.
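To illustrate the granular-locking point, here is a hypothetical sketch, assuming handleRequest touches some shared state (the hit counter below is invented for illustration): rather than marking the whole method synchronized, only the shared state is protected - here with lock-free logic via AtomicLong - so the slow per-connection I/O never holds a lock.

    import java.net.Socket;
    import java.util.concurrent.atomic.AtomicLong;

    public class RequestHandler {
        // Hypothetical shared state: a count of requests served.
        private static final AtomicLong hits = new AtomicLong();

        public static void handleRequest(Socket c) {
            // Lock-free update of the only shared state...
            long n = hits.incrementAndGet();
            // ...while the (comparatively slow) per-connection I/O
            // runs with no lock held at all, e.g.:
            // readRequest(c); writeResponse(c, n);
        }
    }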
Basically, you're on the right track! Hope this helps.