Cisco search patent: my concerns

December 31st, 2009 by Sjan Evardsson

An article yesterday at bnet.com about Cisco’s patent filing for search has me concerned. Instead of relying on crawling links (and obeying robots.txt) like current search engines do (or at least should), Cisco’s idea is to look into packets at the network level and pull apart network traffic to discover HTTP requests. While that may not sound so terrible, I can see a need to change the way I do some business.

I often have development work, intended for collaboration with clients, that is wholly undiscoverable via web crawling. It is not that there are any great secrets there (unless the client is particular about not letting anyone see what their new site will look like before it goes live), but it is not meant to be permanent, either. This means that unless you know the full URL to the documents in question you are not likely to find them. These URLs are emailed to the client so they can click the link in their email and let me know which parts of the app work the way they want, what doesn’t work, what UI changes they would like to make, etc. With the standard web crawlers these pages will never show up in a search listing.

If a layer three network device is picking those URLs out of traffic it is passing, however, those pages might be indexed, and once indexed, added to search results. Now, a week later, when the directory x79q3_zz_rev2 is trashed, there are indexed results pointing at pages that will return nothing but a 404. Not good for me, not good for the client, and not good for the individual doing the search.
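To make the concern concrete, here is a rough sketch of the kind of URL reconstruction a device doing HTTP introspection would have to perform on each request it forwards. This is purely illustrative (Cisco’s actual method is whatever the patent filing describes), and the host name is a stand-in:

```javascript
// Illustrative only: roughly what a device inspecting HTTP traffic would
// have to reconstruct from each request it forwards. The host name is a
// stand-in; Cisco's actual method is described only in the patent filing.
function urlFromHttpRequest(raw) {
  const lines = raw.split('\r\n');
  const path = lines[0].split(' ')[1];              // "GET /x79q3_zz_rev2/ HTTP/1.1"
  const hostLine = lines.find(l => /^host:/i.test(l));
  const host = hostLine ? hostLine.replace(/^host:\s*/i, '') : '';
  return 'http://' + host + path;
}
```

Multiply that parsing work by every request a page actually triggers (the HTML, the CSS, the JS, the images) and the bandwidth worry below follows directly.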

My second concern is one of bandwidth. Yes, I know, there is lots of bandwidth and “everybody is on broadband these days anyway” (I don’t know how many times I hear that). Be that as it may, the “everybody” that is on broadband is not actually everybody, and anything that adds more delay to packet routing only makes the situation worse. And what happens when user A sends a request through their ISP to get an HTTP resource? How many hops does it cross? And how many of those will be running Cisco devices? (Hint: most). How many of those Cisco devices are going to do introspection on that packet to pull out the URL? How long does that take? Now consider how many HTTP requests your browser actually makes when downloading a web page. The page itself, linked CSS files, linked JS and any images (and let’s please not even consider AJAX requests).

While the idea is novel, I don’t think it is a good idea, and I would actually hope that Cisco gets the patent and sits on it and uses it merely to bludgeon anyone who actually tries to do this.

One to watch?

May 1st, 2007 by Sjan Evardsson

Sun is proposing an alternative to AJAX, called Project Flair, which is set for early release later this year. In an InfoWorld article, Sun engineer and principal investigator Dan Ingalls describes it as being more like the old style of desktop application programming (using a JavaScript programming kernel) that adds collaboration and web access.

How this actually ends up performing is anyone’s guess, but I’ll be keeping an eye out for it.


Free Source for CSS Templates

March 14th, 2007 by Sjan Evardsson

There is a great resource for free CSS Templates, at (would you believe it) freecsstemplates.org! There are some very nice uses of CSS-only rounded corners (similar to Nifty Corners Cube), some very professional designs and plenty of fun, personal-type designs for blogs and such. Definitely worth a look.


Why the Change

January 23rd, 2007 by Sjan Evardsson

While it may seem abrupt, the switch to WordPress was by no means a quick and easy decision. Here’s a little history:

  • I originally started on MoveableType, but couldn’t get it to run reliably in my test environment. So I figured I would go to a flat-file system.
  • Enter Blosxom: it ran very well in both my test and live environments, but I was left with a bit of a problem. I wanted to extend Blosxom and add functionality but am not well-versed enough in Perl to wrap my head around many of the available plugins. My biggest headache: getting trackback/writeback and RSS to work.
  • So I switched to PyBlosxom. Also flat-file, and very easy to move my old content from Blosxom, and with an immensely more understandable API.
  • After running PyBlosxom for a year I was still having problems with XML-RPC – I wanted to switch from my clunky PHP/TinyMCE editor for posting to using something like Performancing for Firefox (which I am using now) or Ecto. No luck. The response on the developers list was, well, listless at best.
  • When I finally got fed up with trying to make things work, and the (seeming) lack of active development, I realized that a blog that is (ostensibly) about “stuff that w0rks” should be running on “stuff that w0rks.”
  • I tried MoveableType again (still don’t like it), tried Serendipity (it didn’t feel right), and then finally broke down and tried WordPress. While the first couple of days were no better than the first days on the others, it soon started to fall into place.
  • And that brings us here.


Disclosure of Website Vulnerabilities Illegal?

January 16th, 2007 by Sjan Evardsson

A discussion earlier today brought up the question. It seems that Eric McCarty, a student in Dr. Pascal Meunier’s CS390 – Secure Computing at Purdue University, discovered and reported a flaw on the Physics department website. When that site was hacked two months later (most likely through a different flaw, since the one reported by McCarty had been patched), law enforcement came looking for Mr. McCarty. In this particular case McCarty came forward and was eventually cleared. However, it did change how Dr. Meunier teaches his class. He no longer recommends disclosure, but instead recommends that one eliminate all evidence of the discovery from one’s computer and say nothing.

I see this as a particularly disturbing direction in which to move.

Bookmarklet and Google Gadget for etymonline.com

December 19th, 2006 by Sjan Evardsson

I ran across the Online Etymology Dictionary the other day and was blown away by the well-designed and incredibly useful service they offer. It’s much nicer to have access to that functionality at a click, so of course I created a Firefox/Mozilla bookmarklet. But I wanted to have the same thing available on my Google homepage, right next to the Dictionary search box and the Wikipedia search box, so I created a “Google Gadget” for it as well.

To use the bookmarklet, drag the link below into your Firefox/Mozilla bookmarks bar.

Find Etymology
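For the curious, a bookmarklet like this boils down to building a search URL and navigating to it. The original link isn’t reproduced here, so the query-string format below is my assumption about etymonline.com’s search parameter, not necessarily what the real bookmarklet uses:

```javascript
// Assumed reconstruction of the bookmarklet's logic; the search URL
// format is a guess, not necessarily what the original link used.
function etymologyUrl(term) {
  return 'http://www.etymonline.com/index.php?search=' + encodeURIComponent(term);
}

// As a one-line bookmarklet it would read something like:
// javascript:void(window.open('http://www.etymonline.com/index.php?search='+encodeURIComponent(prompt('Look up:'))))
```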

To use the “Google Gadget” go to your Google homepage, click on the “Add Stuff” link, click on “Add by URL” and enter http://www.evardsson.com/files/gg_etymonline.xml

Enjoy!

XHTML Friends Network

September 3rd, 2006 by Sjan Evardsson

If you haven’t yet heard of it, XFN (the XHTML Friends Network) promises a simple way to harness XHTML rel attributes to define relationships on the web. With simple additions to links such as rel=”friend met colleague neighbor” you could define a link as going to a site owned by someone you consider a friend, who works in the same field as you, whom you have met in person, and who, in fact, lives close to you.
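Since XFN rel values are just space-separated tokens, consuming them takes almost no code; a minimal sketch:

```javascript
// XFN rel values are space-separated tokens, so splitting the attribute
// value is all the parsing a consumer needs.
function xfnTokens(rel) {
  return rel.trim().split(/\s+/);
}
```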

To see where all this is going, be sure to check out the XFN: What’s Out There? page, and take a look at RubHub, the new XFN lookup service. And of course, I had to add bookmarklets to make it easy to search RubHub.
Search RubHub
Search RubHub in a new window

In other news, I have seen a plugin for Blosxom (the Perl kind) that checks links in stories against a tab-delimited list of values to add XFN information to links within the story. While the simplicity of having that handled automatically is nice, I have to wonder what kind of performance hit it would incur. I first thought about doing something like that for PyBlosxom, but I think I will look into other ways to do it, rather than require extra pre-processing on every story display.
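What that plugin does amounts to a lookup table keyed by URL plus a pass over each story’s links, which is exactly where the per-display cost comes from. A sketch of the idea (the function names and the “url TAB rel-values” list format are my assumptions, not the plugin’s actual API):

```javascript
// Hypothetical sketch of the Blosxom plugin's lookup: a tab-delimited
// "url<TAB>rel-values" list becomes a map, then every link in a story's
// HTML is checked against it on each display.
function parseXfnList(text) {
  const map = {};
  for (const line of text.split('\n')) {
    if (!line.trim()) continue;
    const [url, rels] = line.split('\t');
    map[url] = rels;
  }
  return map;
}

function addXfn(html, map) {
  return html.replace(/<a href="([^"]+)"/g, (m, url) =>
    map[url] ? `<a href="${url}" rel="${map[url]}"` : m);
}
```

Even this toy version runs a regex pass over every story body on every render, which is the performance hit worth worrying about.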

Speed vs. cool

April 13th, 2006 by Sjan Evardsson

I have been talking to my service provider, trying to get increased upstream bandwidth, but am shocked at the price. I find it unbelievable that a carrier that doesn’t provide service in Anchorage can offer 768K up and down for much less than the carriers in Anchorage charge.

Anyhow, since it seems that upgrading my bandwidth isn’t likely to happen any time soon, I think I need to rework some elements of the site. The JS menus on the top bar and right-click currently use images for all of the elements. I will probably rework those to be text only. I would like to move away from JS menus and go to CSS menus, but they don’t work in Internet Explorer. Hmm.

So, for the short term, it looks like I need to pull some of the things that one of my co-workers describes as “way cool” and switch to something that will load faster for those who will notice it – which is anyone with a broadband connection. Of course, the provider says that 360K up is plenty, because I can keep six 56K modems busy. I personally don’t know anyone who still connects to the internet via dial-up. I am sure there are some out there, and there are others on their cable or phone company’s “free” plan who get 56 or 128K cable or DSL connections, but I don’t personally know any of them either.

VeriSign-ICANN deal: much ado about nothing?

February 21st, 2006 by Sjan Evardsson

There has been a large amount of FUD generated in the last week regarding the ICANN–VeriSign settlement. Most of what I have seen has been coming from name registrars, and notably from the blog of Bob Parsons (founder and president of GoDaddy).

It seems that Bob is trying to encourage people to write to their congressmen to get involved and squash the deal. I find it interesting that his post is from last Wednesday (2/15/06), while the deal was penned sometime prior to October 24, 2005. If this is such a big deal, why did it take Bob so long to respond?

Most of the FUD is along the lines of an evil empire-type scheme to raise the prices for .com registration so VeriSign can fill their coffers with the money of the poor, down-trodden netizens. This is, of course, based on the pricing information in section 7.3.d which states:

Maximum Price. The Maximum Price for Registry Services subject to this Paragraph 7.3 shall be as follows:

  1. from the Effective Date through 31 December 2006, US$6.00;
  2. for each calendar year beginning with 1 January 2007, the smaller of the preceding year’s Maximum Price or the highest price charged during the preceding year, multiplied by 1.07.
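In other words, the US$6.00 ceiling can compound by at most 7% per year. A quick sketch of the worst case (this assumes the registry always charges the prior year’s maximum, which the clause does not require):

```javascript
// Worst-case ceiling under section 7.3.d: $6.00 through 2006, then at
// most 7% growth per year. Assumes the prior year's maximum was always
// charged, which the clause itself does not require.
function maxPrice(year) {
  let price = 6.00;
  for (let y = 2007; y <= year; y++) price *= 1.07;
  return price;
}
```

Even under that worst case the ceiling only reaches about $7.86 by 2010, which puts the “evil empire” framing in perspective.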

Does this mean that VeriSign is going to rush out and raise the price as much as possible? No. VeriSign is a solid, reputable company that has been in the market long enough to know how to set (and, if need be, raise) prices in a manner that will not negatively impact the market.

ICANN was never meant to be a regulator, but a coordinating body. I’m sure that Paul Twomey and Vinton Cerf knew what they were doing in setting up this deal. For a more logical look at the implications check out this article by Keith Teare from November 30, 2005, or look at the documents yourself and make your own decisions.