Optimizing a for loop in PHP


I have been running a foreach loop 1000 times on a PHP page. The code inside the foreach loop looks like this:

$first          = mysql_query("SELECT givenname FROM first_names ORDER BY RAND() LIMIT 1");
$first_n        = mysql_fetch_array($first);
$first_name     = $first_n['givenname'];
$last           = mysql_query("SELECT surname FROM last_name ORDER BY RAND() LIMIT 1");
$last_n         = mysql_fetch_array($last);
$last_name      = $last_n['surname'];
$first_lastname = $first_name . " " . $last_name;

$add     = mysql_query("SELECT streetaddress FROM user_addresss ORDER BY RAND() LIMIT 1");
$addr    = mysql_fetch_array($add);
$address = $addr['streetaddress'];

$unlisted  = "unlisted";
$available = "available";

$arr = array(
    $first_lastname,
    $address,
    $unlisted,
    $available
);

Then I have been using the array_rand function to get a randomized value each time the loop runs:

<td><?php echo $arr[array_rand($arr)] ?></td> 

Loading the PHP page is taking a very long time. Is there a way to optimize this code? I need a unique value each time the loop runs.

The problem is not the PHP foreach loop. If you order a MySQL table by RAND(), you are making a serious mistake. Let me explain what happens when you do this.

Every time you make a MySQL request, MySQL attempts to map your search parameters (WHERE, ORDER BY) to indices to cut down on the data it reads. It then loads the relevant info into memory for processing. If that info is too large, it defaults to writing it to disk and reading it back from disk to perform the comparisons. You want to avoid disk reads at all costs: they are inefficient, slow, repetitive, and can be flat-out wrong under specific circumstances.

When MySQL finds an index it can possibly use, it loads the index table instead. The index table is a mapping between the value of the index and a memory location. So, for instance, the index table for a primary key looks like this:

  id      location
  1       0 bytes in
  2       17 bytes in
  3       34 bytes in

This is extremely efficient, because even large index tables can fit in tiny amounts of memory.

Why am I talking about indices? Because by using RAND(), you are preventing MySQL from using them. ORDER BY RAND() forces MySQL to create a new random value for each row. That requires MySQL to copy all of your table data into what is called a temporary table, and to add a new field for the RAND() value. If the table is too big to store in memory, it is stored on disk.
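You can see this happening with EXPLAIN: on an ORDER BY RAND() query, MySQL reports both a temporary table and a filesort. (The table and column names below just mirror the question; run it against your own schema.)

```sql
EXPLAIN SELECT givenname FROM first_names ORDER BY RAND() LIMIT 1;
-- The Extra column typically reads: "Using temporary; Using filesort"
```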

Once you tell MySQL to ORDER BY RAND() and that temporary table has been created, MySQL compares every single row in pairs (MySQL sorting uses quicksort). Since the rows are big, you're looking at quite a few disk reads per operation. When it's done, it returns, and you get your data, at a huge cost.

There are plenty of ways to prevent this massive overhead snafu. One of them is to generate a random id up to the table's maximum index and select that single row with LIMIT 1, which does not require the creation of a temporary field. There are plenty of similar Stack Overflow questions about this.
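A minimal sketch of that approach for the first-name lookup, assuming the first_names table has a mostly gap-free AUTO_INCREMENT id column (that column name and assumption are mine, not from the question), and keeping the question's mysql_* API for consistency:

```php
<?php
// Fetch the highest id once; MAX() on an indexed column is an index-only lookup.
// If this runs inside the 1000-iteration loop, do it once before the loop instead.
$max_r = mysql_query("SELECT MAX(id) AS max_id FROM first_names");
$max   = mysql_fetch_array($max_r);

// Pick the random number in PHP instead of making MySQL sort the whole table
$rand_id = mt_rand(1, (int) $max['max_id']);

// ">=" plus ORDER BY id LIMIT 1 tolerates small gaps left by deleted rows,
// and the id index makes this a cheap range seek rather than a full scan
$res = mysql_query("SELECT givenname FROM first_names
                    WHERE id >= $rand_id ORDER BY id LIMIT 1");
$row = mysql_fetch_array($res);
$first_name = $row['givenname'];
```

Note that rows immediately following a gap are slightly more likely to be picked; for generating test data, as in the question, that bias is usually acceptable.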

