PECL on OS X Mountain Lion: Quick and dirty

July 29th, 2013 by Sjan Evardsson

Yes, this is pretty simple, but I had to look around for too long to find a solution that didn’t involve homebrew or ports or (even worse) some kind of path manipulation to install PEAR/PECL to MAMP. (No, I do not want to set my bash_profile to use the MAMP PHP over the default. And no, I don’t want to recompile PHP – at least not today and at least not until I want to upgrade the version installed.) I just wanted to install pecl_http to run some tests, and I figured if I didn’t put my notes somewhere I would lose them. So here they are.

Before you begin, you need to have Xcode installed – get it from the App Store.

Installing PEAR (which includes PECL) is pretty straightforward (thanks to Jason McCreary at pureconcepts.net).

Two simple terminal commands, and some configuration:

curl -O http://pear.php.net/go-pear.phar
sudo php -d detect_unicode=0 go-pear.phar

At the configuration prompt:

Type 1 and then Return, then type:
/usr/local/pear

Type 4 and then Return, then type:
/usr/local/bin

Hit Return and you are done (with the first part).

Verify pear with:
pear version

Now, before you run off and type sudo pecl install pecl_http, you should know that it will fail, as autoconf is not yet installed. Thankfully, this is quite simple to fix as well (thanks to this question on serverfault.com).

Download the latest release from http://ftp.gnu.org/gnu/autoconf/autoconf-latest.tar.gz

Extract the files and do a normal ./configure; make; sudo make install;

Now you can
sudo pecl install pecl_http
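
Once the install finishes, a quick sanity check will confirm the extension actually loaded (pecl will usually tell you at the end of its output if you need to add extension=http.so to your php.ini yourself):

php -r 'var_dump(extension_loaded("http"));'

If that prints bool(true), pecl_http is ready for your tests.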

Symfony2 + Propel 1.6 with Memcache

July 4th, 2012 by Sjan Evardsson

I was looking for a way to put the Propel 1.6 “Instance Pool” into Memcache for a Symfony2 project I am working on, and I have managed it. Here is what I did; it may help you, but all the usual caveats apply. (Your mileage may vary. Use only under the direct supervision of your doctor. Do not allow children to use unsupervised. Not to be taken internally.)

What you have:

I am assuming that you currently have Symfony2 set up with Propel 1.6 as your ORM. If not, there are guides (http://symfony.com/doc/current/book/installation.html, http://www.propelorm.org/cookbook/symfony2/working-with-symfony2.html) to help you. Do that first, then come back. I’ll wait.

The setup I am using assumes you will have multiple servers behind a load balancer of some sort, along with a memcache server that is accessible to all of them.

This is a sample of such a setup.

What you need in Symfony2:

Now that you have Symfony2 and Propel 1.6 set up, you need to create a new Symfony bundle to handle your Memcache connections. (You will see the need for this soon.)

What I have done is create a bundle which will contain all my shared utilities among all the apps built on this Symfony2 instance. For the sake of this article we will call the bundle “MyApp.” (I am not going to go into detail on how to create a Symfony2 bundle – the Documentation is your friend.)

Once we have created a bundle and registered it with app/AppKernel.php by adding

new MyApp\MyAppBundle\MyAppBundle(),

in the $bundles array under registerBundles(), we need to create a class to read configurations from the app/config directory (config_dev.yml, config_test.yml and config_prod.yml) so that our Memcache settings can be configured by environment. Note that we are not adding any routes for this bundle; it is used for utility stuff only, not for routable pages.

For this we will also need to make sure that we have MyAppExtension and Configuration in place in src/MyApp/MyAppBundle/DependencyInjection.

This gives us access to anything in the config_XXX.yml files under the “my_app” node.
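
A minimal sketch of what that Configuration class might look like (this uses the standard Symfony2 Config component; the default values shown are illustrative assumptions, not anything the bundle mandates):

<?php
// src/MyApp/MyAppBundle/DependencyInjection/Configuration.php (sketch)
namespace MyApp\MyAppBundle\DependencyInjection;

use Symfony\Component\Config\Definition\Builder\TreeBuilder;
use Symfony\Component\Config\Definition\ConfigurationInterface;

class Configuration implements ConfigurationInterface
{
    public function getConfigTreeBuilder()
    {
        $treeBuilder = new TreeBuilder();
        $rootNode = $treeBuilder->root('my_app');

        // Mirrors the my_app -> cache settings shown in the yml examples below
        $rootNode
            ->children()
                ->arrayNode('cache')
                    ->children()
                        ->scalarNode('server')->defaultValue('127.0.0.1')->end()
                        ->scalarNode('port')->defaultValue(11211)->end()
                        ->scalarNode('expire')->defaultValue(120)->end()
                    ->end()
                ->end()
            ->end();

        return $treeBuilder;
    }
}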

We will call the configuration class “MyAppConfiguration” and place it in the src/MyApp directory.

Now that we have a class that can read our own special configurations we need to add a class to handle the Memcache connection.

For this example we will create this class as Poolcache\CacheHandler. In order to do that we add a directory in src/MyApp called Poolcache and create the CacheHandler.php there. This means we can now set up the Memcache server locations based on the environment. For example, in config_dev.yml you might include:

my_app:
    cache:
        server: 127.0.0.1
        port:   11211
        expire: 120

While in config_prod.yml you might use:

my_app:
    cache:
        server: mymemcache.mydomain.dom
        port:   11211
        expire: 3600

(All the sample files can be downloaded here.)
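
For reference, here is a minimal sketch of what such a CacheHandler could look like. The getInstance(), getPool(), setPool() and clearPool() methods are the ones the behavior below relies on; the Memcached wiring and the hard-coded defaults are just illustrative (in practice the server, port and expire values would come from the my_app config node above):

<?php
// src/MyApp/Poolcache/CacheHandler.php (sketch)
namespace MyApp\Poolcache;

class CacheHandler
{
    private static $instance = null;
    private $memcache;
    private $expire;

    private function __construct($server = '127.0.0.1', $port = 11211, $expire = 120)
    {
        // Uses the pecl memcached extension; the older memcache
        // extension would work much the same way.
        $this->memcache = new \Memcached();
        $this->memcache->addServer($server, $port);
        $this->expire = $expire;
    }

    public static function getInstance()
    {
        if (self::$instance === null) {
            self::$instance = new self();
        }
        return self::$instance;
    }

    // Each instance pool is stored as one array under a namespaced key
    public function getPool($name)
    {
        $pool = $this->memcache->get('pool_' . $name);
        return is_array($pool) ? $pool : array();
    }

    public function setPool($name, array $pool)
    {
        $this->memcache->set('pool_' . $name, $pool, $this->expire);
    }

    public function clearPool($name)
    {
        $this->memcache->delete('pool_' . $name);
    }
}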

Creating the Propel behavior:

Now that we have Symfony2 set up the way we need, we can add the Propel behavior. In the vendor/propel/generator/lib/behavior directory we will be adding MemcachedPoolBehavior.php.

This class uses the parser to replace some of the code in the generated classes.

<?php
require_once __DIR__.'/../util/PropelPHPParser.php';
class MemcachedPoolBehavior extends Behavior
{

    /**
     * Filter to add the CacheHandler class to the Peer objects so they
     * can use Memcached or whatever other cache you want to use
     */
    public function peerFilter(&$script)
    {
        $keyname = $this->getTable()->getPhpName();
        $newAddInstanceToPool = "
    public static function addInstanceToPool(\$obj, \$key = null)
    {
        if (Propel::isInstancePoolingEnabled()) {
            if (\$key === null) {
                \$key = (string) \$obj->getId();
            } // if key === null
            \$cache = \\MyApp\\Poolcache\\CacheHandler::getInstance();
            \$pool = \$cache->getPool('%s');
            \$pool[\$key] = \$obj;
            \$cache->setPool('%s', \$pool);
            /*self::\$instances[\$key] = \$obj;*/
        }
    }
    ";
        $newAddInstanceToPool = sprintf($newAddInstanceToPool, $keyname, $keyname);
        $parser = new PropelPHPParser($script, true);
        $parser->replaceMethod('addInstanceToPool', $newAddInstanceToPool);
        $script = $parser->getCode();

        $newRemoveInstanceFromPool = "
    public static function removeInstanceFromPool(\$value)
    {
        if (Propel::isInstancePoolingEnabled() && \$value !== null) {
            if (is_object(\$value) && \$value instanceof %s) {
                \$key = (string) \$value->getId();
            } elseif (is_scalar(\$value)) {
                // assume we've been passed a primary key
                \$key = (string) \$value;
            } else {
                \$e = new PropelException(\"Invalid value passed to removeInstanceFromPool().
                    Expected primary key or %s object; got \" .
                    (is_object(\$value) ? get_class(\$value) . ' object.' : var_export(\$value,true)));
                throw \$e;
            }
            \$cache = \\MyApp\\Poolcache\\CacheHandler::getInstance();
            \$pool = \$cache->getPool('%s');
            unset(\$pool[\$key]);
            \$cache->setPool('%s', \$pool);
            /*unset(self::\$instances[\$key]);*/
        }
    }
    ";

        $newRemoveInstanceFromPool = sprintf($newRemoveInstanceFromPool, $keyname, $keyname, $keyname, $keyname);
        //$parser = new PropelPHPParser($script, true);
        $parser->replaceMethod('removeInstanceFromPool', $newRemoveInstanceFromPool);
        $script = $parser->getCode();

        $newGetInstanceFromPool = "
    public static function getInstanceFromPool(\$key)
    {
        if (Propel::isInstancePoolingEnabled()) {
            \$cache = \\MyApp\\Poolcache\\CacheHandler::getInstance();
            \$pool = \$cache->getPool('%s');
            if (isset(\$pool[\$key])) {
                return \$pool[\$key];
            }
        }
        return null; // just to be explicit
    }
    ";

        $newGetInstanceFromPool = sprintf($newGetInstanceFromPool, $keyname);
        //$parser = new PropelPHPParser($script, true);
        $parser->replaceMethod('getInstanceFromPool', $newGetInstanceFromPool);
        $script = $parser->getCode();

        $newClearInstancePool = "
    public static function clearInstancePool()
    {
        \$cache = \\MyApp\\Poolcache\\CacheHandler::getInstance();
        \$cache->clearPool('%s');
        /*self::\$instances = array();*/
    }
    ";

        $newClearInstancePool = sprintf($newClearInstancePool, $keyname);
        //$parser = new PropelPHPParser($script, true);
        $parser->replaceMethod('clearInstancePool', $newClearInstancePool);
        $script = $parser->getCode();
    }

}

Notice that we are calling the static getInstance method on the cache handler class we created earlier, and using that to move the instance pool from the class’s static properties to the memcached server.

Final steps:

Back in Symfony2, add the following to app/config/propel.ini:

#memcaching instance pool
propel.behavior.memcachedpool.class = behavior.MemcachedPoolBehavior

And in your Symfony2 application bundles, when you set up your Resources/config/schema.xml for Propel, you can add <behavior name="memcachedpool" /> to place the instance pool into Memcache, like so:

<?xml version="1.0" encoding="UTF-8"?>
<database name="default" namespace="Acme\HelloBundle\Model" defaultIdMethod="native">

    <table name="book">
        <column name="id" type="integer" required="true" primaryKey="true" autoIncrement="true" />
        <column name="title" type="varchar" primaryString="1" size="100" />
        <column name="ISBN" type="varchar" size="20" />
        <column name="author_id" type="integer" />
        <foreign-key foreignTable="author">
            <reference local="author_id" foreign="id" />
        </foreign-key>
        <behavior name="memcachedpool" />
    </table>

    <table name="author">
        <column name="id" type="integer" required="true" primaryKey="true" autoIncrement="true" />
        <column name="first_name" type="varchar" size="100" />
        <column name="last_name" type="varchar" size="100" />
        <behavior name="memcachedpool" />
    </table>

</database>

This means you can pick and choose which classes are maintained in Memcache as well, so that not every class needs to be there.

*Crickets* …

June 6th, 2012 by Sjan Evardsson

Ok, six months, no posts. It isn’t that I have been lazy, just that I have been busy.

I am currently working on a project that will require using memcached (to store sessions and model data) for a Symfony2 with Propel 1.6 project. When I get it all worked out I will let you know how to do the same.

Propel has an “instancePool” that acts like a model cache, except that it is stored in a static property on the model peer class. This works great for a single server setup with APC, but not so well when multiple servers need to share the pool through a common memcache server. I am thinking that I will probably need to use a Propel Behavior to modify the peer classes at generation time. But I am just getting started down that road, and this is not a tech post, more of a “reports of my death are greatly exaggerated” post.

72 Hours

January 23rd, 2012 by Sjan Evardsson

Saturday, 9:00 am, Jan 21
I am writing this the old-fashioned way, pencil on paper by daylight. I will, of course, transcribe this later to a digital medium, add some of the photos I took, and post it online. But for now it is just me, a notebook, and a pencil in front of the wood stove. It is currently Saturday, at 9:00 am, approximately 37 hours after the power went out. Of course, that wasn’t really the start of the story, just a point along the way.

The story really started on Monday: Martin Luther King, Jr. Day, a federal holiday. Not a holiday for me, though, as I jumped online for a normal telecommute day. During the day the weather turned ugly and rain turned to snow as the temperature dropped. With the forecast for even more snow overnight I knew that trying to make the trek to Seattle would be dangerous (snow on ice with lots of inexperienced snow commuters.) The forecast called for warming and rain on Wednesday, though. “Great!” I thought. “I’ll go in to Seattle on Thursday and Friday this week rather than Tuesday and Thursday.” I notified my boss; problem solved.

Tuesday came around and the roads were even worse than I had expected. The garbage and recycling never came, and neither did the mail. (So much for “Neither snow nor rain nor …”) That’s okay; by Tuesday afternoon the forecast was still for warming and rain on Wednesday.

Unfortunately that wasn’t the way it happened. The warm wet front came over us, all right, but it slid over the top of the cold arctic front that was already here. And what happens when rain falls through a cold air pack like that? Correct! More snow. We ended up tying the 100-year record for 12-hour snowfall and landing in the top 5 of the 100-year totals for 24-hour snowfall. We got 14 inches of snow in what seemed like no time flat. This was the point where my boss was kind enough (and smart enough) to tell me not to make the trip in to Seattle this week. It didn’t make me feel any less guilty, though, and I poured on the hours as a result. (I really need to work on these guilt issues.)

Had it just stopped there, or had the temperature risen five or six degrees, we would have been fine. The weather had other plans for us, however. It seems the convergent fronts formed an alliance with the goal of wiping out all the trees. Really, I don’t know what the trees did to deserve it, but I would have to guess that the politics of trees and clouds are not entirely un-messy. After dumping 14 inches of snow on the trees the Weather Alliance unleashed its secret weapon against the Tree Union: freezing rain (also known as an “ice storm.”) This does two things to the snow already on the trees. First, it loads it with moisture, increasing its weight immensely. Second, it causes the moisture-laden snow to freeze to the growing sheath of ice around the branches, holding them in an icy grip.

As the ice grew Wednesday night and on into Thursday the trees began to take heavy casualties. There was a steady beat of “CRACK! THUD!” every few seconds as another branch was torn from its tree and crashed to earth with its heavy load of ice and snow. By midday Thursday one of these beats of destruction was happening every 10 to 15 seconds. And as the day wore on the branches crashing down were larger and larger. The weaker, smaller branches came down first, while heavier branches, and even the tops of some of the trees held on as long as they could, before they, too, were doomed. One particularly loud (and close) crash was the sound of our 65 foot tall cedrus losing a 30 foot long lower main branch.

By Thursday evening the sounds of crashing limbs became somewhat “normal,” the huge branch that crashed down on the neighbor’s roof notwithstanding. I had finished work for the day, did a little studying and was unwinding by playing a little Spore. (Ok, I like the game, so sue me.) It was then that the Weather Alliance’s attack on the Tree Union caused collateral damage. The power went out. I had the modem and router on a UPS, and by shutting down the server quickly was able to maintain an internet connection for another four hours or so. I was pretty well convinced that the power would be back on by morning, we had a fire going in the wood stove already and we dug out the camping lanterns. I set the alarm on my phone and went to sleep.

When my phone woke me in the morning I knew immediately that the power was still out. My small bedside lamp had been on when the power went out and I hadn’t turned it off, yet it was still pitch dark.

I checked the weather on my phone at 6:08 am. (The reason I know the time so precisely will become clear later.) I brushed my teeth, got dressed and then checked on the fire. Once I was sure the fire was going well I composed an email to work, hit send, and shut down the screen (like always.) While I dug the camp stove out of the garage (one must have coffee, after all) I kept having this feeling that something was missing. I made a pot of coffee (with the last ground coffee we had on hand) and watched the fire.

Now I was looking at the first real challenge of the day – branches were still coming down and we had no more firewood by the house. We had plenty in the wood pile, but getting there would require traversing directly under several heavily-laden branches which could come down at any moment. Getting hit by one of those would be painful at best, and possibly even fatal. (There was one tree-top already impaled in the ground just 20 feet from the deck.)

While waiting for the weather to warm up some (and hopefully relieve the trees of their burden) I took a short diversion – a trip to the store to look for D-cell batteries for our lantern and to pick up some food that didn’t require cooking, and (perhaps most importantly) some ground coffee. (Note: I really need to get a hand-powered coffee grinder.) While the roads in our neighborhood were in horrendous shape, the main roads were plowed and mostly just wet with a few patches of ice and slush left by this time (around 1:00 pm). It seemed that no one in the area had any D-cell batteries left though. Even the big-box hardware store was sold out of them. I had to be satisfied with some ground coffee, some food items and some C-cell batteries. (Ask me some other time about how to rig a C-cell battery to run a D-cell device.)

I continued to check the weather to see if any relief was in sight. Oddly, despite the fact that it felt warmer outside, the weather report on my phone didn’t change. Somewhere around the tenth time I checked it I noticed the “last updated” notice: “Friday, 20 January, 6:08 AM.” That’s why it hadn’t changed. I wasn’t getting data at all. In fact, no email all day? How likely is that? Now I realized what had been missing in the morning: no “mail sent” sound. I tried again to email work but it wasn’t getting through. I couldn’t even load the Google main page; it just kept saying “The server quit responding.” I tried sending a text message to one of my coworkers. It looked like it went through, but I got no response, so I had no way of telling. And in my home, where I usually get four or five bars of reception, I was only getting one. I managed to talk to my in-laws briefly, but there was a great deal of static and the call would drop if I wasn’t careful about how and where I moved.

By late Friday afternoon the arctic front had apparently decided it was done with the offensive against the Tree Union and pulled out of the Weather Alliance. The warming air combined with the rain had melted a good portion of the snow and ice off the trees, and large chunks of ice were falling out of the trees without taking the limbs with them. I braved the falling ice around 3:00 pm and brought some more wood up to the house. Now, except for a battery shortage, we were set for another night.

Saturday 2:45 pm
I started the morning with a fresh pot of coffee (of course), reset the fire, brought more wood up to the house, and dragged the limbs that had fallen out of our trees and into the street back out of the street. The huge cedrus branch is too large to move, so I cut the ends off to at least get it out of the street. I was planning on going to Costco, still in search of D-cell batteries, but needed something to do to keep myself occupied.

I decided that I needed to do something about wanting to tell the story of the ice storm around here. Something other than just posting a bunch of pictures. So I began writing this morning. At around 10:00 am I took a break from writing and went to Costco in search of D-cell batteries. Score! I found some. When I got back home my wife suggested we go out for a hot meal. We went to the Martin Way Diner (perhaps the best fries in Olympia) where I was finally able to get some data on my phone. The continued weather forecast looks promising (not that that means anything) and PSE says they currently have 900 people working on getting the power back on. The number of homes and businesses without power at that time was still at around 130,000, down from the initial 300,000+. Their map of our area showed green diamonds (repairs done) all around our neighborhood with only 2 red squares (still being assessed) in the area. As we drove home we were noticing lots of porch lights on in areas that had been without power when we left. Pulling in to our neighborhood, though, we saw that we were still without power.

Sunday, 8:30 am
Yesterday evening I made a quick trip to the grocery store for ice. While our chest freezer is still quite cold (still well below freezing) our refrigerator is not. While I was picking up ice my wife removed all the spoiled food from the refrigerator and packed the rest into our two coolers. When I returned with the ice we packed that in there as well.

I made a jug of iced tea last night as well. Cooling it off once it was brewed was not a problem. I set it in the snow on the deck and then packed more snow around it. And there it will stay, nice and cold, until the power comes back or the snow melts. I am hoping for the former to come first. In fact, any minute now would be good. It has currently been more than 60 hours since the power went out. I would like to get a start on cleaning up the branches, but with no power (not to mention the wet conditions) my hands are tied. I am beginning to think that getting an electric chainsaw may not have been the best choice. Well, not really. Since our electric here comes from hydro (when it comes, that is) using electric tools (like the chainsaw and the electric mower) is just more environmentally sound.

If we still have no power this evening we may be staying with my in-laws (they have a generator) and tomorrow I will make the trek in to Seattle to work in the office. I really hope it doesn’t come to that, as I don’t like the idea of leaving my wife at home with no communications.

Sunday, 10:15 am
It turns out that one of our coolers is utterly worthless. The ice has almost completely melted and it leaked all over the floor. While it is fine for road trips it is certainly not up to the storage task. I managed to get the latest info from PSE on when the power in this area might be back on. It looks like late Wednesday night now. That sucks the big one. So, the plan for today:

  • Bring more wood up to the house
  • Go get a phone that doesn’t require external power
  • Get more ice
  • Look for a better cooler (another “extreme” cooler like the one that is working well would be good)
  • Find an open laundromat so I have some clean clothes
  • Head to the in-laws to at least shower, if not to spend the night

I am not sure what I will do Monday and Tuesday night, since we can’t really leave the house alone for too long. The cats need to stay warm too and they haven’t figured out how to use the wood stove yet. (That will probably come right after they evolve thumbs.)

Sunday, 8:16 pm
While at the store picking up a new cooler and ice (and striking out on a non-powered phone) I got a call from my in-laws. It seems that their power came back on last night. I made the arrangements to head over there for the day to do laundry and use their shower and went home. Once home I packed up the laundry, we grabbed the laptops, my phone and my wife’s Kindle and we headed over to the in-laws. After being stuffed on delicious soup and sandwiches, charging our devices, washing our laundry, and getting a nice hot shower we headed back home to tend the fire and the cats, and so I could get everything I needed to take to work together.

After getting things situated around the house (putting away laundry by flashlight is, shall we say, a little more challenging than usual) my wife realized she didn’t download the books she wanted while we were at the in-laws house. She suggested we take a short trip to the store to look for a new laptop bag (hers is a hair too small for her laptop) and then we could head out to somewhere with wireless for a cup of coffee or dinner or something so she could connect and download the next books in the series she is reading. We didn’t find a large enough laptop bag she liked, so we headed back to the Martin Way Diner, where I know they have free wireless.

While there we had a light dinner and failed to connect her Kindle to the wireless, although my phone connected just fine. We finally gave up and headed home. Once home we re-stoked the fire and were just getting ready to settle in to “no electricity tonight” mode when the lights came back on at a couple minutes before 8:00 pm – 72 hours after they had gone out. We were ecstatic, of course, and the first thing I did was to get the modem and router turned on and the server booted up. Once that was all well I began transcribing this, while letting my photos upload to my flickr account.

How intranet software goes to hell

December 25th, 2011 by Sjan Evardsson

We have all seen it, many of us have tried to clean it up, and a few of us may have even been responsible for some of the worst written, non-documented, buggy, spaghetti-like code ever – “internal use only” apps. These are apps that are meant to simplify the jobs of your co-workers, meant to automate repetitive tasks and meant to be a means for managing the company’s business. So how do they end up so terrible? You’ve got the hottest, leanest, cleanest code on the public facing side, so you obviously have the talent in-house to make good software. (And let’s be honest, all of us think our own software is the best, because if we didn’t we would die of shame whenever anyone asked where we worked.)

So how does the software we build for ourselves go so wrong? Well, in my observations through many jobs over many years, I have come up with a formula for really lousy internal software.

Step one: Start small
By start small I don’t mean start with a single database with 4 or 5 tables and a couple views and a few report generation scripts. I mean start really, really small. Like “put a page on the intranet that lists all our vendors and their current status.”

By starting with such a small task it is easier to forgo any sort of documentation, architecture planning or requirements specifications. It’s also easier to convince yourself that this is unimportant. After all, this is merely a convenience for your fellow workers and not an integral part of the revenue stream. This is the first step on the road to ruin.

Step two: Occasionally add a feature, not too much at once
It is important at this early stage in the gestation of your beastly code that you keep feature adds at least as small as the original task. By not having anything “worthy” of architecture or specification you can guarantee the continued growth of your new monster. These should be things like “Can we also show the vendor’s contact info on that list?” followed a month later by “can we filter the list to only show active vendors?” These changes should not only be small, but should be spaced far enough apart that the developer involved has forgotten about the changes that came before, or at least how many there were.

Step three: Repeat steps one and two, several times
Now that you have a minor little thing here, it is time to add some more. This time, let’s do the same thing, but for, say, clients. Because you already have the basics, it is a perfect time for some copy-and-paste development. Change the query, but don’t bother with changing variable names or anything. After all, you already know it works; just use it as is with some text label changes on the output. Easy-peasy, and it took you about five minutes. At this rate, you could just as quickly add the same sort of thing for employees. And any other sort of list that comes up.

Step four: Time for a big change
Now it is time to turn all your “unrelated” (although code-copied) little, unimportant, non-revenue stream items into one full-fledged app. Since you are already convinced that none of this is very important, and most of it is already built and functioning, it is easy to convince yourself that turning this into one contact management app is a small enough task to not need architecture, requirements or even any real documentation. This is generally where the real shape of the beast starts to take form. Now your query and display scripts will need to be able to insert, update and delete, and your one display will need to be diversified into display and edit forms, and perhaps a login page to ensure the person using the forms has permission to edit or delete.

If you really want to do it up right, instead of turning it into the obvious (in this case a contact management app) turn it into something close, but not quite the same. Say, an inventory and order management app. Hey, we already have the client and vendor info, we’re more than halfway there, right?

Step five: The final chapter
The last step is perhaps the easiest. Once the monstrosity is running on your intranet and working (however badly), ignore it. Requests for bug fixes go to the bottom of the queue as it is, after all, not part of your revenue stream. Developer time is better spent on your customer-facing apps and there really is no need to make it work completely, because “we got along fine without it before it was built.”

Of course, during this waiting period the app that horror built becomes a routine part of the workflow of those who use it regularly and they pretty well can’t do their job without it any more.

How to avoid it altogether, in one simple step:
In my experience, the simplest way to avoid these kinds of nightmare creatures of code is to require a full architecture, specification and documentation cycle for even the simplest little things. You are likely to find that even though you were only asked for a vendor list, what your co-workers really need is far beyond that. Of course, you will only bother with treating it like any other development cycle if you can see the project as an important part of your business, and as having impact on the revenue stream. If it seems too small to bother with treating it like a full project, then either the requester has failed to make clear its importance, or it really is something that should not even be taken on.

Windows 7 – My Take

October 15th, 2011 by Sjan Evardsson

Having just completed a Microsoft certification (MCTS 70-680) I have learned more about Windows 7 (and some about Server 2008 R2) than I have in over a year of using it. To be fair, I do not use Windows 7 as my primary platform, but I do use it in a VM on a fairly regular basis. For the most part I pretty well like Windows 7, at least as far as Windows goes. But that is not the primary point of this post. I would like to point out what I feel are some security-related pros and cons of some new (and some not-so-new) features in Windows 7 and Server 2008.

BranchCache: In a typical main office / branch office setup with a file server in the main office, every time a user in the branch office opens a file from the file server (in the main office) it travels across the WAN link. This is not only a waste of limited bandwidth, but it is slow, leading to things like users copying files to their local machine, grabbing copies of several files onto a thumb drive while in the main office and even (I have seen this) emailing the file to their private (non-work) account. BranchCache helps out here, with only one copy of any file accessed going across the WAN to be cached in the branch office (thus the name). Every time the file is opened after that in the branch office it is opened from the local copy. The only time files are transferred across the WAN again is when they are modified on either end.

  • Pros:
    • Removes the need for users to come up with “creative” ways to get copies of files from the main office to work on.
    • Files are encrypted in transport.
    • Only one copy of any file is ever cached at the branch office, and it is kept up-to-date with the version at the main office.
  • Cons:
    • Requires Server 2008 R2 at the main office, with Active Directory and Certificate Services.
    • Using “Hosted” BranchCache (where the cache is held on a server in the branch office) requires Server 2008 R2 with Active Directory and Certificate Services at the branch location as well.
    • Using “Distributed” BranchCache (where the cache is held on the peer user machines in the branch office) can lead to more trips across the WAN for the files, since whenever a machine is powered down or unplugged from the network part of the cache goes down with it.

BitLocker: Full-drive encryption. Sweet! But … ?

  • Pros:
    • With a TPM the entire drive can be encrypted.
    • With a TPM, removing the drive and placing it in another machine means it will not boot without the presence of a recovery key.
    • Can require a USB key and password to boot.
  • Cons:
    • Without a TPM the drive cannot be locked to a particular boot environment.
    • With a TPM BitLocker can be configured to boot with nothing more required than the TPM itself. If you set it up this way, why bother? The machine will boot and the drive’s contents will be available regardless.
    • Only available for Windows 7 Enterprise or Ultimate editions.

Network Access Protection: This is a really good idea, and not limited to Server 2008 R2, but around since Server 2008. NAP allows a server to check connecting clients (either through VPN or DirectAccess) to make sure they are up-to-date with OS patches, have the proper version and patch level for software and anti-virus and are not running anything blocked through Group Policy Application Security settings.

  • Pros:
    • Computers that do not pass the NAP requirements can be shunted off to a quarantine network where the needed updates can be pushed to the computer before they are allowed to connect to the internal network.
    • NAP can enforce Application Security policies, and can keep remote users up-to-date with the patches and application versions used in the internal networks.
  • Cons:
    • NAP requires that connecting computers have the proper settings in their local Group Policy Object to allow DHCP or IPSec NAP Enforcement, which can make implementation difficult if they are not connected internally first, to get those Group Policy settings pushed to them.
    • NAP is likely to make some users unhappy when they cannot simply log on to the VPN and start to work, but instead are forced to wait for updates. This could cause the sort of push-back that makes admins likely to scrap these sorts of setups.

DirectAccess: I am so unsure about this one. The definition from TechNet:

DirectAccess allows remote users to securely access internal network file shares, Web sites, and applications without connecting to a virtual private network (VPN). ... DirectAccess establishes bi-directional connectivity with an internal network every time a DirectAccess-enabled computer connects to the Internet, even before the user logs on. Users never have to think about connecting to the internal network and IT administrators can manage remote computers outside the office, even when the computers are not connected to the VPN.

  • Pros:
    • DirectAccess connects via ports 80 and 443, meaning that it works from within most firewalls (in hotels, coffee shops, airports, etc).
    • Even when connecting via port 80 all DirectAccess communications are encrypted.
    • Bi-directional access means that admins in the internal network can access the connected machine as if it was in the internal network to push out Group Policy changes, provide remote assistance, etc.
  • Cons:
    • DirectAccess connects before the user even logs on. This means that if the machine is on and has internet connectivity it is connected to the internal network.
    • Since it does not require the user to take any action to connect (like connecting to a VPN) the user is less likely to be aware that anything they download (like this “really cool Java game”) also has access to the internal network.

A scenario: Company A and Company B both have Windows networks with Server 2008 R2 and traveling users with Windows 7 Enterprise laptops with a TPM. Both companies have set the laptops up with BitLocker full drive encryption and boot protection. Both companies have set the laptops up with DirectAccess. Company A is quite a bit stricter than Company B, however. Company B’s laptops are set to boot automatically without a USB key or password, while Company A requires both. Further, the local Group Policy security settings on Company A’s laptops will log the user off and shut down the computer if the USB key is removed. Company A has gone a step further in implementing NAP to ensure that all their traveling computers are always up-to-date.

While User A (from Company A) and User B (from Company B) are having drinks in an airport lounge their laptops are stolen. Both User A and User B think p@ssw0rd is a good enough password. The thief opens the laptop from Company A and cannot boot it without the USB key which is in User A’s pocket. The Company A laptop is only as useful to the thief as the hardware. The Company B laptop, however, will boot (automatically decrypting the drive) and will also connect to Company B’s internal network. A couple guesses later the thief is logged on as the laptop’s user and connected to Company B’s internal network with all the access that the user would have were they plugged in locally.

There is room to implement a great deal of security in Windows 7, but there is also a lot of room to totally mess it up. As I said earlier, I am not sure that DirectAccess is such a good idea, but I guess it depends on how the rest of the system is configured and how well users are educated.

Data breach for Kroger

April 2nd, 2011 by Sjan Evardsson

Just got an email today from Kroger saying that they had suffered a data breach and to (essentially) watch out for spam. The text of the message:

Kroger wants you to know that the data base with our customers’ names and email addresses has been breached by someone outside of the company. This data base contains the names and email addresses of customers who voluntarily provided their names and email addresses to Kroger. We want to assure you that the only information that was obtained was your name and email address. As a result, it is possible you may receive some spam email messages. We apologize for any inconvenience.

Kroger wants to remind you not to open emails from senders you do not know. Also, Kroger would never ask you to email personal information such as credit card numbers or social security numbers. If you receive such a request, it did not come from Kroger and should be deleted.

If you have concerns, you are welcome to call Kroger’s customer service center at 1-800-Krogers (1-800-576-4377).

Sincerely,

The Kroger Family of Stores

And now, why I am not in the least concerned.

  1. Kroger is the parent company of 29 supermarket, warehouse, discount grocery and convenience store chains, 4 jewelry store chains and 3 financial services companies. I have a “rewards card” type account at one of those 29 grocery-type places that links my email address with my name. However, I do not have an online account with any of them. (I don’t see the need to create yet another account to “log in” to the web site of a store down the street to print the same coupons they send me in email and physical mail.)
  2. I do not have any payment methods tied to that account (obviously, as I have no “online account” with them.)
  3. When I am sent details of my coupons and money-back rewards I get those via email with a link to view them. Sure, someone sniffing on the wire could get the link and print out my money-back certificates. But they are tied to the physical “rewards card” I have with the store, so they don’t really do anyone else any good unless they clone my card.

So, even though I am not particularly worried about this data breach (especially since my real name is tied to that email address in lots of publicly available places online), I do have to give Kroger credit for informing their customers. Now I am just hoping they release a little more information about how it happened, what steps they took, etc. Are you listening, Kroger? Thanks.

Edit: @Tekneek pointed me to this article by Brian Krebs. According to Krebs’ article it looks like their email marketing service provider Epsilon was breached.

WordPress support fail: WP-Stats and JetPack

March 19th, 2011 by Sjan Evardsson

Apparently I am not the only person running in to the message “Your WordPress.com account, [account name] is not authorized to view the stats of this blog”.

There is a work-around posted at a couple places, but this did not work for me.

There seems to be no official word from WP yet, at least not that I can find. There are a lot of recommendations to “just remove WP-Stats and install JetPack” – however, that does not seem to work for me either. Instead I get “You do not have sufficient permissions to access this page.”

If WP had planned to replace WP-Stats with JetPack, shouldn’t they have let everyone know? And if they had planned to modify WP-Stats, shouldn’t they have first tested those changes to make sure it didn’t break existing installations? And if they are having unexpected issues shouldn’t they be A) working diligently to correct them and B) assigning someone to respond in the support forums?

Blogaversary? 5 years …

January 12th, 2011 by Sjan Evardsson

I know there are several people who like to keep track of these kinds of things, their annual blog anniversary (“blogaversary”) and such, and I hadn’t given it much thought. At least until I realized that this January marks five years that I have been blogging, some years more than others. As I am currently not only working full-time+ (45 – 60 hours/week) but am also enrolled in school full time (better late than never?) things have been slow around here.

Development on SPDO languishes due to the same concerns, and even my knitting has been put on hold for several months now. Such is life. Anyway, five years now. Maybe I’ll bring it up again in another five years’ time. Maybe not. There is one thing that has been bothering me, though. The state of the blog itself. It could use a face-lift. Meh, it’ll hold for a while longer. Maybe next year …

Simplified ANSI color term support in PHP

December 23rd, 2010 by Sjan Evardsson

I was working on a script that needed some color terminal output, and while it wasn’t particularly complicated, I found it was slowing me down. Flipping back and forth between a list of ANSI color codes and my work was frustrating. So, I did what I am often prone to do: a quick Google search for a PHP ANSI color terminal library. I found some things that were old, not maintained and not really fitting what I needed. So then, I did what I always end up doing in that situation: I built one.

The ANSI class is a way to quickly create several different foreground and background color combos along with a few style effects (like underline, inverse, and if you really must, blink). Of course the style effects only work on the standard 16 ANSI VT-100 terminal colors (the normal and “bold” or “bright” versions of black, red, green, yellow, blue, purple, cyan and white.)

The simplest way to use it is to create a new ANSI object for each color combo you want. So if you want red text on a white background, underlined bright green text on a black background and blue text on a yellow background, you could create three objects like so:

include_once('ansi.class.php');
$red_white = new ANSI(ANSI::RED, ANSI::WHITE);
$bright_green_black = new ANSI(ANSI::GREEN, ANSI::BLACK, array(ANSI::BRIGHT, ANSI::UNDERLINE));
$blue_yellow = new ANSI('blue', 'yellow');

Notice that I used a couple different ways of setting the colors, the class constant ints and strings. The effects are set in an array since you can chain multiple effects on a single color scheme (until you get into the extended color space, more on that in a minute.) Once you have these objects, styling your terminal output is simple.

$red_white->p("This is red on a white background, and prints no newline.");
$bright_green_black->p("This is bright green on a black background and prints no newline.");
$blue_yellow->pline("This is blue on a yellow background and will print a newline character.");
$red_white->setInverse(true);
$red_white->pline("This is now white on a red background.");

The p() and pline() methods will spit out the correct escape sequence and color codes to style and color the text, then print the text, then spit out the correct escape sequence and color code to “reset” the term to its default. This means no running a script that displays a warning and then leaves your terminal stuck with bright yellow text on a red background.
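
If you are curious what that looks like under the hood, it is just SGR escape sequences. A stripped-down illustration (a hypothetical simplification, not the actual class internals):

<?php
// Wrap text in an SGR color sequence and follow it with a reset (\033[0m)
// so the terminal doesn't stay styled after the text prints.
function colorize($text, $fg, $bg)
{
    // 30-37 are the standard foreground codes, 40-47 the background codes
    return sprintf("\033[%d;%dm%s\033[0m", 30 + $fg, 40 + $bg, $text);
}

echo colorize("red on white", 1, 7), "\n"; // 1 = red, 7 = white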

So now that the standard color space is taken care of, how about a little love for the xterm 256 color space? Simple enough. Any int value passed to the color arguments of the constructor greater than 7 will automatically invoke the 256 color space. The first 16 colors (0 – 15) are just the default terminal colors, of course, but colors 16 – 231 are the extended color space, with 24 greyscale values from colors 232 – 255. So how do we know what color is what? Well, we can either call one of the static functions to view the color space (ANSI::showForegroundColors(), ANSI::showBackgroundColors()) or we can pass in a value from the static ANSI::rgb($r, $g, $b) function, which takes, you guessed it, three integer values from 0 – 255. While ANSI::rgb() tries to get to the closest color in the color space it still needs work. The very simplistic manner in which it is currently implemented is not the most accurate. It is on my to-do list somewhere, though.
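
For the curious, the usual xterm trick (and roughly what ANSI::rgb() is aiming for) is to scale each 0 – 255 channel down to 0 – 5 and index into the 6x6x6 cube that starts at color 16. A sketch of that mapping (again, not necessarily how the class implements it):

<?php
// Map an RGB triple onto the xterm 256-color cube (colors 16-231).
function rgbToXterm($r, $g, $b)
{
    $scale = function ($c) {
        return (int) round($c / 255 * 5); // squash 0-255 into 0-5
    };
    return 16 + 36 * $scale($r) + 6 * $scale($g) + $scale($b);
}

echo rgbToXterm(204, 153, 0); // 178, a gold-ish cube color

A more accurate mapping would also consider the greyscale ramp (colors 232 – 255) and the fact that the cube’s channel steps are not evenly spaced, which is roughly why a naive version like this one still needs work.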

$grey_gold = new ANSI(ANSI::rgb(31, 31, 31), ANSI::rgb(204, 153, 0));
$grey_gold->pline("This is grey text on a gold background.");
// effects don't work in the extended color space, except for inverse
$grey_gold->setInverse(true);
$grey_gold->pline("This is gold text on a grey background");

If you know of any way to apply the ANSI styles (underline, blink, inverse) in conjunction with the extended color space leave a comment to let me know. If you think the script could use some extra functionality do the same.

It is not incredibly clever or full-featured or any of those sorts of things, but it does what I needed it to do. If you would like to, you can download a copy of the ANSI class (ansi.class.php.zip) – it is released under the MIT license, and is free to use, copy, distribute, etc etc.