PHP Singleton? Not really

December 7th, 2010 by Sjan Evardsson

If you couldn’t tell by the long silence, things around here have not been as quiet as I had hoped. However, while reading the PHP: Patterns page I came across a large number of implementations of the Singleton pattern.

I happen to like the Singleton pattern, and use it in Java and Python (where the VM maintains the “one and only one” instance), but not so much in PHP.

Why, you ask? It is simply this: you cannot create a true Singleton in a PHP web application. Since every page load is executed in a separate thread (or process, depending on your server setup), every page load gets its own instance of the class. Any hope of true Singleton behavior is lost.

As a way to illustrate this, here is a PHP “Singleton” class and an associated PHP page. Throw them up on a test server and hit the page.

Then try to increment the counter. See what happens; I’ll wait.

The class:

<?php
/**
 * test for PHP multi-threaded singleton
 */
class Singleton {
  private static $instance;
  private $counter;

  /**
   * Constructor is private
   */
  private function __construct() {
    $this->counter = 0;
  }

  /**
   * Entry point. Get the static instance of Singleton
   */
  public static function getInstance() {
    if (is_null(self::$instance)) {
      self::$instance = new Singleton();
    }
    return self::$instance;
  }

  public function __clone() {
    trigger_error('Clone not allowed for '.__CLASS__, E_USER_ERROR);
  }

  public function incrementCounter() {
    $this->counter++;
  }

  public function getCounter() {
    return $this->counter;
  }
}
?>

The page:

<?php
include_once('singletontest.php');
$s = Singleton::getInstance();
if (isset($_GET['inc'])) {
  $s->incrementCounter();
}
?>
<html>
<head><title>Multi-threading PHP Singleton? Not Likely</title></head>
<body>
<h3>Singleton test</h3>
<p>The counter is at <?php echo $s->getCounter(); ?></p>
<pre><?php var_dump($s); ?></pre>
<p><a href="<?php echo $_SERVER['PHP_SELF']?>?inc=1">Increment the counter</a></p>
</body>
</html>

In this first version, even within one browser the limitations are clear. The Singleton instance is recreated on every page load. So, what if we serialize our $counter variable to disk? Will that help? Let’s try it.

The modified class:

<?php
/**
 * test for PHP multi-threaded singleton
 */
class Singleton {
  private static $instance;
  private $counter;

  /**
   * Constructor is private
   */
  private function __construct() {
    $init = 0;
    if (file_exists('/tmp/singleton.ser')) {
      $str = file_get_contents('/tmp/singleton.ser');
      $init = unserialize($str);
    }
    $this->counter = $init;
  }

  /**
   * Entry point. Get the static instance of Singleton
   */
  public static function getInstance() {
    if (is_null(self::$instance)) {
      self::$instance = new Singleton();
    }
    return self::$instance;
  }

  public function __clone() {
    trigger_error('Clone not allowed for '.__CLASS__, E_USER_ERROR);
  }

  /**
   * Since PHP does not create "only one" instance globally, but by thread, we
   * need a way to store our instance variables so that each thread is getting
   * the same values.
   * Note that threads holding a version of this will have the old value until
   * they reload the Singleton (by a page refresh, etc).
   */
  public function incrementCounter() {
    // We need to update the serialized value
    $handle = fopen('/tmp/singleton.ser', 'w+');
    // Get an EXCLUSIVE lock on the file to block any other reads/writes while
    // we modify
    if (flock($handle, LOCK_EX)) {
      // Only update the instance variable's value AFTER we have a lock
      $this->counter++;
      // empty the file
      ftruncate($handle, 0);
      // write out the value
      fwrite($handle, serialize($this->counter));
      // and unlock so that everyone else can read the new value
      flock($handle, LOCK_UN);
    } else {
      // You would probably prefer to throw an Exception here
      echo "Couldn't get the lock!";
    }
    fclose($handle);
  }

  public function getCounter() {
    return $this->counter;
  }
}
?>

The modified page:

<?php
include_once('singletontest.php');
$s = Singleton::getInstance();
$x = false; // set true when the long listing is requested
if (isset($_GET['inc'])) {
  $s->incrementCounter();
} else if (isset($_GET['ext'])) {
  $x = true;
}
?>
<html>
<head><title>Multi-threading PHP Singleton? Not Likely</title></head>
<body>
<h3>Singleton test</h3>
<p>The counter is at <?php echo $s->getCounter(); ?></p>
<pre><?php var_dump($s); ?></pre>
<?php if ($x) {
  for ($i = 0; $i < 1000; $i++) {
    $s = Singleton::getInstance();
    echo '<p>The counter is at '.$s->getCounter().'</p><p>';
    // busy-wait so this page keeps running while you increment in another browser
    for ($j = 0; $j < 10000; $j++) { echo '. '; }
    echo '</p>';
  }
}
?>
<p><a href="<?php echo $_SERVER['PHP_SELF']?>?inc=1">Increment the counter</a></p>
<p><a href="<?php echo $_SERVER['PHP_SELF']?>?ext=1">Do a long list</a> 
fire this off in one browser and increment in another.</p>
</body>
</html>

Using the modified versions above, open two separate browsers. Point both at the page, increment in one, then reload the other. So far so good. Now set off the long list in one and increment in the other while it is still running. What happened? The Singleton pattern works within a given thread, so for as long as that thread runs, changes made to the Singleton’s serialized data will not be visible in another thread.

There is a possible work-around: read and unserialize the value every time getCounter() is called. At the expense of a little more overhead, you get the expected behavior in terms of object state.

But back to the real question: is it a Singleton? No, not really, in the sense that most of us think of a Singleton, which is system- or application-wide. But it is one within its containing thread, which might make it more useful for command-line PHP in long-running scripts. (Like those report generation scripts that you are running in a daily cron job that join 18 tables and generate 500,000 line CSV files … no? Just me?)
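For what it’s worth, here is a minimal sketch of that work-around (my illustration, not part of the classes above): drop this in place of getCounter() in the modified class, and it re-reads the serialized value, under a shared lock, on every call.

  /**
   * Work-around sketch: re-read the serialized value on every call so that
   * changes written by other threads become visible immediately.
   */
  public function getCounter() {
    if (file_exists('/tmp/singleton.ser')) {
      $handle = fopen('/tmp/singleton.ser', 'r');
      if ($handle !== false) {
        // SHARED lock: many readers at once, but never during a write
        if (flock($handle, LOCK_SH)) {
          $str = stream_get_contents($handle);
          flock($handle, LOCK_UN);
          if ($str !== false && strlen($str) > 0) {
            $this->counter = unserialize($str);
          }
        }
        fclose($handle);
      }
    }
    return $this->counter;
  }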

Looking into evercookie

September 23rd, 2010 by Sjan Evardsson

Things have been rather quiet around here lately as I have been busy with work and school. Something in my twitter stream yesterday caught my eye, though. It seems that Samy Kamkar has come up with a way to make a seriously persistent cookie. How does it work? By storing the cookie value using (currently) 10 different methods:

  • Standard HTTP Cookies
  • Local Shared Objects (Flash Cookies)
  • Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out
  • Storing cookies in and reading out Web History
  • Storing cookies in HTTP ETags
  • Internet Explorer userData storage
  • HTML5 Session Storage
  • HTML5 Local Storage
  • HTML5 Global Storage
  • HTML5 Database Storage via SQLite

It seems from the site that this is a project in active development, with even more methods to come. Currently the only mitigation is using Safari in Private Browsing mode, which destroys all versions of the evercookie on browser restart. In the coming weeks I will have some time to spend on personal projects, and I may use some of that time to look into this further.
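To make one of those vectors concrete, here is a minimal sketch (my own illustration, not Samy’s code) of the HTTP ETag method. The server hands the browser an ETag; when the browser revalidates its cached copy it sends the value back in an If-None-Match header, so the identifier survives clearing cookies:

<?php
// Force revalidation on every visit so we always see If-None-Match
header('Cache-Control: private, max-age=0, must-revalidate');
if (isset($_SERVER['HTTP_IF_NONE_MATCH'])) {
  // Returning browser: read the identifier back out of the ETag
  $id = trim($_SERVER['HTTP_IF_NONE_MATCH'], '"');
  header('HTTP/1.1 304 Not Modified');
  header('ETag: "'.$id.'"');
  exit;
}
// First visit: assign an identifier and store it in the ETag
$id = uniqid();
header('ETag: "'.$id.'"');
echo 'Assigned id: '.$id;
?>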

Fix for firehol get-iana script

June 28th, 2010 by Sjan Evardsson

I have talked before about using firehol to configure iptables. I won’t go into all the details about how wonderful and awesome it is, but trust me, it makes configuring iptables a snap.

Firehol includes a script, get-iana.sh, that downloads the IPv4 address space list from IANA and populates a file called RESERVED_IPS that firehol uses when configuring iptables. Basically, any traffic from outside coming from any reserved or unallocated IP block is dropped automatically. As you can imagine, keeping this file updated regularly is important, as previously unallocated blocks are allocated for use. To this end, whenever firehol starts it checks the age of the RESERVED_IPS file and, if it is older than 90 days, warns you to update it by running the supplied get-iana.sh.

However, there has been a change recently in how the IANA reserved IPv4 address space file is formatted. There are lots of posts on plenty of forums with patches to make get-iana.sh accept and use the new-format plain text file (the default format is now XML rather than plain text), and needless to say I tried every single one I could find. None of them worked, so what to do? How about a complete rewrite in Python? And while we’re at it, let’s use the XML format that IANA wants everyone to use.

So, one lunch hour of hacking and here it is, working like a charm. You can copy this, but I recommend downloading it to avoid whitespace issues.

#!/usr/bin/python

"""
file: get-iana.py

Replacement for get-iana.sh that ships with firehol and no longer seems to work.
This is less code, less confusing, uses the preferred XML format from IANA and works.

Copyright (c) 2010 Sjan Evardsson

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""

import urllib
import xml.dom.minidom
import os

# fetch the current IPv4 address space list in IANA's preferred XML format
urllib.urlretrieve('http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xml','address-space.xml')
results = []
x = xml.dom.minidom.parse('address-space.xml')
for i in x.childNodes:
    if i.localName == 'registry':
        for j in i.childNodes:
            if j.localName == 'record':
                # reset for each record so a missing field can't carry a
                # value over from the previous record
                prefix = status = None
                for k in j.childNodes:
                    if k.localName == 'prefix':
                        prefix = k.firstChild.data
                    if k.localName == 'status':
                        status = k.firstChild.data
                if status == 'RESERVED' or status == 'UNALLOCATED':
                    results.append(prefix)
# write one /8 network per line to a temp file, then swap it into place
outfile = open('iana-temp','w')
for r in results:
    hi = int(r.split('/')[0])
    outfile.write(str(hi)+'.0.0.0/8\n')
outfile.close()
os.remove('address-space.xml')
os.rename('/etc/firehol/RESERVED_IPS','/etc/firehol/RESERVED_IPS.old')
os.rename('iana-temp','/etc/firehol/RESERVED_IPS')
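The resulting RESERVED_IPS file is just one /8 network per line, for example (illustrative entries only; the real list changes as blocks get allocated):

0.0.0.0/8
240.0.0.0/8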

International ME/CFS Awareness Day

May 12th, 2010 by Sjan Evardsson

Today is (was?) International ME/CFS Awareness Day, and the Sock It 2 ME/CFS project is officially launched. Hoping to do for ME/CFS sufferers, research budgets and families what the AIDS quilt did for HIV, the “sock project” has the potential to open a lot of eyes.

Some quick info from the site:

What is Myalgic Encephalomyelitis/Chronic Fatigue Syndrome?

Myalgic Encephalomyelitis, or Chronic Fatigue Syndrome, as it’s known in the US, is a debilitating disease which has been classified by the World Health Organization (WHO) as an organic, infectious neuro-immune disorder since 1969. It can occur in both epidemic and sporadic forms; over 60 outbreaks of ME/CFS have been recorded worldwide since 1934.

ME/CFS …

  • causes more functional impairment than diabetes, heart failure or kidney disease.
  • creates a level of disability comparable to MS, chemotherapy or the final stages of AIDS.
  • strikes an estimated 17 to 20 million worldwide, impairing function and shortening lives.
  • like AIDS in the early days, gets inadequate funding due to widespread misunderstanding.
  • has only recently gained notice in blood banks internationally as an infectious disease concern.
Apache and PHP HTTP PUT Voodoo

April 27th, 2010 by Sjan Evardsson

While trying to work out the details for a PHP REST utility I kept running into a wall when it came to using HTTP PUT (and HTTP DELETE) with Apache 2.2 and PHP 5. There are plenty of scattered tidbits of information relating to this on forums about the web, many old, and many more incomplete or even unhelpful. [As a side note: if someone on a forum you frequent is asking for help with getting HTTP PUT to work in Apache, telling them "Don't use PUT it lets the hax0rs put files on your server! N00b! Use POST LOL!!11!" is not helping, nor does it make you look intelligent.]

The first hint I came across was putting Script PUT put.php in your httpd.conf in the <Directory> section. (That is, of course, assuming that your script for handling PUT requests is called put.php.)

I tried that and on restarting Apache got the error “Invalid command ‘Script’, perhaps misspelled or defined by a module not included in the server configuration” – which led to a short bit of research (thanks Google!) that pointed out that the Script directive requires mod_actions to be enabled in Apache. I did that and then tried to hit my script with a PUT request, only to get a 405 error: “The requested method PUT is not allowed for the URL /test/put.php”.

Well, that was certainly strange, so I added <Limit> and <LimitExcept> blocks to my <Directory> section, but to no avail. So I changed the <Directory> directive from <Directory /var/www/test> to <Directory /var/www/test/put.php>. It looked strange, but what the heck, worth a try. I could now do PUT requests, but only as long as the URL was /test/put.php, and that is not what you want when putting together a RESTful application. Trying to do anything useful, like a PUT to /test/put.php/users/, resulted in more 405 errors, now saying “The requested method PUT is not allowed for the URL /test/put.php/users/”.

So, back to the httpd.conf to change the <Directory> back to its previous setting. And then on to the other method I saw in a few places: using mod_rewrite to forward PUT (and DELETE) requests to the script. Of course, everywhere I saw this listed it was claimed that this alone (without the Script directive) was enough to enable PUT. So, I commented out the Script directive and added some mod_rewrite statements to the .htaccess file (which is always preferable in development, as you can make changes on the fly without reloading or restarting the server). I added a RewriteCond %{REQUEST_METHOD} (PUT|DELETE) and a RewriteRule .* put.php.

And I went back to test it again and, big surprise, got a 405 error again. Now even pointing directly at /test/put.php got a 405 error. So I decided to try combining the two. I uncommented the lines in the httpd.conf and bumped the server, and was pleasantly surprised that PUT (and DELETE) requests to the /test/ directory were properly handled by the script. Now I could do something useful, like add another mod_rewrite rule to send all traffic for /api/ to /test/put.php. Calling /api/users/ with a PUT (or DELETE) request was properly handled!

So, putting it all together:

In Apache: enable mod_actions and mod_rewrite. In Gentoo, make sure the lines

LoadModule actions_module modules/mod_actions.so

and

LoadModule rewrite_module modules/mod_rewrite.so

in httpd.conf are not commented out. In Debian the commands

a2enmod actions

and

a2enmod rewrite

do the trick.

In the httpd.conf add the following:

<Directory /var/www/test>
    <Limit GET POST PUT DELETE HEAD OPTIONS>
        Order allow,deny
        # You might want something a little more secure here, this is a dev setup
        Allow from all
    </Limit>
    <LimitExcept GET POST PUT DELETE HEAD OPTIONS>
        Order deny,allow
        Deny from all
    </LimitExcept>
    Script PUT /var/www/test/put.php
    Script DELETE /var/www/test/put.php
</Directory>

And finally, in the .htaccess add the rewrite voodoo:

RewriteEngine On
RewriteBase /test
RewriteRule ^/?(api)/? put.php [NC]
RewriteCond %{REQUEST_METHOD} (PUT|DELETE)
RewriteRule .* put.php

Hopefully this works as well for you as it did for me. Now to get back to the business of actually writing the code to deal with the request and dispatch it appropriately (which may be a post for another day, or you can have a look at how some others have done it). A bare-bones sketch is below.
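For illustration only, here is a hypothetical put.php dispatcher (the handler functions and routing are mine, not the utility mentioned above). The key detail is that PHP does not parse PUT bodies into any superglobal, so you read the raw body from php://input:

<?php
// Hypothetical handlers - replace with real resource logic
function handle_put($path, $body) {
  // create or replace the resource at $path
  header('HTTP/1.1 204 No Content');
}
function handle_delete($path) {
  // remove the resource at $path
  header('HTTP/1.1 204 No Content');
}

$method = $_SERVER['REQUEST_METHOD'];
// mod_rewrite rewrites are internal, so REQUEST_URI still holds the
// original URI, e.g. /api/users/1
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
// PUT bodies are not parsed into $_POST; read the raw request body
$body = file_get_contents('php://input');

switch ($method) {
  case 'PUT':
    handle_put($path, $body);
    break;
  case 'DELETE':
    handle_delete($path);
    break;
  default:
    header('HTTP/1.1 405 Method Not Allowed');
    header('Allow: PUT, DELETE');
}
?>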

By the way, for testing I have found the Firefox plugin Poster to be immensely useful, as well as the Java-based RESTClient.

I need a break …

April 19th, 2010 by Sjan Evardsson

I don’t usually talk much about my day-to-day life here, but that doesn’t mean I never do. This is one of those times. If you just want more tech talk check out the end of this post. The rest is all me whinging anyway. ;)

I need a break. A real break. I mean, I am technically on a break right now from school but it doesn’t really feel that way. I finished out my first year of school a couple weeks ago (26 years after graduating from high school, no less) and I thought “wow, I have an entire month that I can use to rest, catch up on some personal stuff, maybe clean out the garage ….” Unfortunately it is not turning out that way.

Instead I am writing this at 5:54 in the morning, as this has been the first chance I have had to pay any attention at all to the blog. So what has been keeping me busy? Well, first, there is work. I did use a little of what would have been study time to modify the script I use to generate weekly work reports from Trac so that it now shows the amount of change for hours on each ticket (which is set in a “custom” field). And holy cow, I put in 58.5 hours last week. At least 8 of that doesn’t really count, though. I messed my back up and spent some time trying to work while under the influence of cyclobenzaprine, which means that I wrote, scrapped and rewrote one class method at least 6 times before finally giving up. (Programming and drugs that make you stupid don’t mix!)

Aside from work I have been putting some time into a project for a non-profit that is kicking off on May 12th. I’m not allowed to say too much about it ahead of launch, but I can say that it is about raising awareness about ME/CFS and how badly it has been mismanaged and patients marginalized for the past 25 years.

Finally, I upgraded the WordPress plugin Shorten2Ping, which I will continue to pimp as long as it keeps working so well. Of course I like my post tweets to have some hashtag love, so I do a little editing of shorten2ping.php.

Here is a diff:

--- shorten2ping/shorten2ping.php       2010-04-12 10:22:34.000000000 -0700
+++ shorten2ping.mine/shorten2ping.php  2010-04-19 06:47:58.000000000 -0700
@@ -119,6 +119,15 @@
     $post_url = get_permalink($post_id);
     $post_title = strip_tags($post->post_title);
 
+    // add some tag bits here
+    $tags = wp_get_post_tags($post_id);
+    $my_tag_list = '';
+    if (is_array($tags)) {
+        foreach ($tags as $j=>$tag) {
+            $my_tag_list .= '#'.$tag->slug.' ';
+        }
+    }
+
     $short_url_exists = get_post_meta($post_id, 'short_url', true);
 
     if (empty($short_url_exists)) {
@@ -205,9 +214,19 @@
 
     //get message from settings and process title and link
     $message = $s2p_options['message'];
+    $message_bare_char_count = strlen(str_replace(array('[title]','[link]','[tags]'), '', $message));
+    $title_count = strlen($post_title);
+    $link_count = strlen($short_url);
+    $tag_count = strlen($my_tag_list);
+    $over = $message_bare_char_count + $title_count + $link_count + $tag_count - 140;
+    if ($over > 0 && $over <= $title_count/2) {
+        // if the overage is more than half the post title then skip it and let tags get truncated
+        $post_title = substr($post_title, 0, $title_count - $over);
+    }
     $message = str_replace('[title]', $post_title, $message);
     $message = str_replace('[link]', $short_url, $message);
-
+    $message = str_replace('[tags]', $my_tag_list, $message);
+
     if ($s2p_options['ping_service'] == 'pingfm'){
 
        send_pingfm($pingfm_user_key,$post_id,$message);

(You can download the diff as well.)

Ooops! Draft Saved at 6:23:27 am. And it is now 8:10, and this would still be a draft if I wasn’t closing browser tabs.

Gentoo emerge conflicts: SQLite and dev-perl/DBD-SQLite

February 13th, 2010 by Sjan Evardsson

I was having issues with my regular update schedule on my Gentoo server where I kept getting the following message:

('ebuild', '/', 'dev-db/sqlite-3.6.22-r2', 'merge') conflicts with
=dev-db/sqlite-3.6.22[extensions] required by ('installed', '/', 'dev-perl/DBD-SQLite-1.29-r2', 'nomerge')

Since I use SQLite fairly regularly and I like to keep it up to date, I figured I would focus on getting that updated, then worry about the Perl SQLite. (Had I known that spamassassin relies on the Perl SQLite I might have been a little more hesitant, but it all worked out OK anyway.)

Here is how I managed to update both SQLite and the Perl SQLite. I first unmerged dev-perl/DBD-SQLite with:

emerge --unmerge dev-perl/DBD-SQLite

I then updated SQLite with:

emerge -u sqlite

which changed the USE settings to “-extensions”, meaning that when I tried to emerge DBD-SQLite it failed due to the missing USE requirements. So I took a stab at it and did:

USE="extensions" emerge sqlite

which built cleanly without any problems, and after which a quick

emerge dev-perl/DBD-SQLite

worked great.

So, in a quick and easy cut-and-paste format the work-around is:

emerge --unmerge DBD-SQLite
emerge -u sqlite
USE="extensions" emerge sqlite
emerge DBD-SQLite

Why the work-around is required I don’t know at the moment, as I don’t have the time to dig through the ebuild files and figure out where the issue is, although I am sure that if I had waited a bit an updated ebuild would have come down the pipeline to correct the issue. (Patience is a virtue, but I have never been all that virtuous.)

Comparing PHP array_shift to array_pop

February 5th, 2010 by Sjan Evardsson

I noticed a note in the PHP documentation about speed differences between array_shift() (pulling the first element off the array) and array_reverse() followed by array_pop() (resulting in the same data, but arrived at by pulling the last element off the array).

Since I was working on some code to convert URL pieces to program arguments (like turning /admin/users/1/edit into section=admin, module=users, id=1, action=edit – stuff we tend to do every day) I thought I would take a look at the speed differences, since I have always used array_shift() for this (after turning the string into an array via explode()).
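As a quick illustration of the kind of conversion I mean (the key names here are just examples, not from any particular framework):

<?php
$url = '/admin/users/1/edit';
// explode() the path, then array_shift() the pieces off the front
$parts = explode('/', trim($url, '/'));
$args = array();
foreach (array('section', 'module', 'id', 'action') as $key) {
  $args[$key] = array_shift($parts);
}
print_r($args); // Array ( [section] => admin [module] => users [id] => 1 [action] => edit )
?>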

My initial tests showed that array_shift() was much faster than array_reverse() followed by array_pop(), and I wondered why someone would say that in the first place. But then I thought about it for a bit. When using array_shift() the entire remaining array has to be re-indexed on every call. For a very short array (like the one I was using) this is negligible. When you start looking at much larger arrays, however, this overhead adds up quickly.

To find out roughly where the break-even point between these two methods lies, I whipped up a quick script to run with arrays sized from 10^1 values up to 10^5 values. What I found is that at less than 100 values you are not really gaining much (if anything) by using array_reverse() and array_pop() versus array_shift(). Once you get to the 1000-value array size, however, the differences really add up (as you can see in the logarithmic scaling of the chart below).

[Chart: shift_vs_pop – average timings for array_shift() versus array_reverse() + array_pop(), logarithmic scale]

The code I used to generate the numbers (which are shown in the chart as averages over 3 runs, rounded to the nearest millionth of a second) is:

<?php
$counts = array(10,100,1000,10000,100000);
foreach ($counts as $len)
{
    // build two identical arrays of $len sequential integers
    $m2 = $m1 = array();
    $x = 1;
    while ($x <= $len)
    {
        $m2[] = $m1[] = $x;
        $x++;
    }
    echo "Timing with array_shift() for $len items\n";
    echo "000000";
    $s1 = microtime(true);
    while (!empty($m1))
    {
        $tmp = array_shift($m1);
        // every 10th item, backspace over the progress counter and reprint it
        if ($tmp % 10 == 0)
        {
            echo chr(8),chr(8),chr(8),chr(8),chr(8),chr(8);
            echo str_pad(''.$tmp,6,'0',STR_PAD_LEFT);
        }
    }
    $s2 = microtime(true);
    echo "\nTook ",$s2 - $s1," seconds\n";

    echo "Timing with array_reverse and array_pop() for $len items\n";
    $s1 = microtime(true);
    $m2 = array_reverse($m2);
    while (!empty($m2))
    {
        $tmp = array_pop($m2);
        if ($tmp % 10 == 0)
        {
            echo chr(8),chr(8),chr(8),chr(8),chr(8),chr(8);
            echo str_pad(''.$tmp,6,'0',STR_PAD_LEFT);
        }
    }
    $s2 = microtime(true);
    echo "\nTook ",$s2 - $s1," seconds\n";
    echo "\n";
}
?>

Cisco search patent: my concerns

December 31st, 2009 by Sjan Evardsson

An article yesterday at bnet.com about Cisco’s patent filing for search has me concerned. Instead of relying on crawling links (and obeying robots.txt) like current search engines do (or at least should), Cisco’s idea is to look into packets at the network level and pull apart network traffic to discover HTTP requests. While that may not sound so terrible, I can see a need to change the way I do some business.

I often have development work, intended for collaboration with clients, that is wholly undiscoverable via web crawling. It is not that there are any great secrets there (unless the client is particular about not letting anyone know what their new site will look like before it goes live), but it is not meant to be permanent, either. This means that unless you know the full URL to the documents in question you are not likely to find them. These URLs are emailed to the client so they can click on the link in their email and let me know which parts of the app work the way they want, what doesn’t work, UI changes they would like to make, etc. With the standard web crawlers these pages will never show up in a search listing.

If a layer-three network device is picking those URLs out of traffic it is passing, however, those pages might be indexed, and once indexed, added to search. Now, a week later, when the directory x79q3_zz_rev2 is trashed, there are indexed searches pointing at what will return nothing but 404. Not good for me, not good for the client and not good for the individual doing the search.

My second concern is one of bandwidth. Yes, I know, there is lots of bandwidth and “everybody is on broadband these days anyway” (I don’t know how many times I hear that). Be that as it may, the “everybody” that is on broadband is not actually everybody, and anything that adds more delay to packet routing only makes the situation worse. What happens when user A sends a request through their ISP to get an HTTP resource? How many hops does it cross? And how many of those will be running Cisco devices? (Hint: most.) How many of those Cisco devices are going to do introspection on that packet to pull out the URL? How long does that take? Now consider how many HTTP requests your browser actually makes when downloading a web page: the page itself, linked CSS files, linked JS and any images (and let’s please not even consider AJAX requests).

While the idea is novel, I don’t think it is a good one, and I would actually hope that Cisco gets the patent, sits on it, and uses it merely to bludgeon anyone who actually tries to do this.

Custom Parallels VM icons

November 24th, 2009 by Sjan Evardsson

I run a lot of VMs in Parallels. (Currently I am running 7, although not all at once, of course.) I end up with a bunch of red generic Parallels VM alias icons on my desktop. This means that the usual quick visual clues (color, logos, etc.) aren’t there and I have to look at the text underneath. Sometimes I am in a rush and start Windows Server 2008 instead of Windows 7 Pro, or Ubuntu Linux instead of Debian Linux (one is set up as a desktop and one as a server with no X).

I really wanted some custom icons for those VMs. My solution (as usual): when it doesn’t exist, make it. So, I opened pvs.icns (contained in the Parallels Desktop.app bundle at /Applications/Parallels Desktop.app/Contents/Resources/pvs.icns) in Icon Composer.app, selected the 512 x 512 version and copied it to the clipboard. I then pasted that into a new Photoshop document and began editing. I saved each new version as a 512 x 512 pixel PNG and then dropped them in img2icns.app, which converted them to the icns files I needed to customize my VM launchers.

Behold the glory: [animated preview of the custom icons]

They aren’t perfect, especially the Windows Server 2008 one, but they are different enough that it is easy to select the right VM in a heartbeat.

You can download the icns files from http://www.evardsson.com/files/parallels_icons.zip