You can use PHP's cURL extension to download a web page.

function curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
    curl_setopt($ch, CURLOPT_AUTOREFERER, TRUE);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 120);
    curl_setopt($ch, CURLOPT_TIMEOUT, 120);
    curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8");
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

The above function does the following:

  • Initializing cURL
  • Setting cURL's URL option to the $url value passed into the function
  • Telling cURL to return the page data instead of printing it
  • Telling cURL to follow 'Location' HTTP headers (redirects)
  • Automatically setting the Referer header when following 'Location' headers
  • Setting the maximum time (in seconds) to wait while connecting
  • Setting the maximum time (in seconds) the whole request may take
  • Setting the maximum number of redirects to follow
  • Setting the user agent string
  • Executing the cURL request and assigning the returned data to the $data variable (curl_exec() returns FALSE on failure; see the sketch after this list)
  • Closing cURL
  • Finally, returning the data from the function
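As written, the function gives no indication of why a request failed. A minimal sketch of a variant that reports the failure, assuming the same options as above (the name curl_or_fail is just for illustration; note that curl_error() must be called before the handle is closed):

function curl_or_fail($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
    curl_setopt($ch, CURLOPT_TIMEOUT, 120);
    $data = curl_exec($ch);
    if ($data === FALSE) {
        // Capture the error message while the handle is still open
        $error = curl_error($ch);
        curl_close($ch);
        trigger_error('cURL request failed: ' . $error, E_USER_WARNING);
        return FALSE;
    }
    curl_close($ch);
    return $data;
}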

The curl() function is then used like this:

$scraped_website = curl("http://www.example.com");
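Because curl_exec() returns FALSE when the request fails (DNS error, timeout, too many redirects, and so on), it is worth checking the result before using it. A minimal sketch:

$scraped_website = curl("http://www.example.com");

if ($scraped_website === FALSE) {
    // The request failed; handle it however suits your application
    die("Failed to download the page.");
}

// $scraped_website now contains the HTML of the page
echo strlen($scraped_website) . " bytes downloaded\n";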