Adobe’s announcement around SWF searchability was about using the power of the runtime as a replacement for the much less useful approach of parsing static text and links out of a SWF file. The parsing approach has been around for years, and if you are wondering whether it was successful, ask any SEO expert what their opinion of Flash has been for the last few years.

Microsoft responded to the announcement by saying that Silverlight content is searchable “too.” Their solution is the same as our old approach: search for strings in a static file. While I was expecting them to put an admirable spin on the solution available to them, I was not expecting them to claim that this approach is superior to runtime-enabled search.

No matter how elegant an application is, with all of its possible interactions and states, it is a Rube Goldberg machine compared with the simplicity of a plain HTML page. Indexing the static content as Microsoft suggests is like trying to figure out what a Rube Goldberg machine will do by looking at a photo of all of the parts *before* they’ve been assembled. To really know what happens, you have to assemble the machine and let it run. That is what the version of Flash Player optimized for search engines does.

Indexing an RIA through the runtime means that you get the context of what text and links are being displayed at any given point in an application, as well as knowing how hard or easy it was to reach that state. The real value, though, is that you get to do other runtime operations like loading additional content.

One of the main ideas behind “web 2.0” is data. Whether that is an RSS blog feed, a REST API query, or loading localized text, applications keep the interesting text separate from the application itself. Indexing through the runtime means that you get static content, dynamic content, and loaded content; put another way, the search engine sees what the end user sees.
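
To make that concrete, here is a rough sketch in TypeScript (rather than ActionScript); the endpoint, field names, and function are all made up for illustration:

```typescript
// Hypothetical example: none of the user-visible text exists in the
// application file itself; it all arrives at runtime from a data service.
interface ProductFeedItem {
  title: string;
  description: string;
}

async function renderCatalog(): Promise<void> {
  // Made-up endpoint; stands in for an RSS feed, REST query, or localized strings.
  const response = await fetch("https://example.com/api/products?lang=en");
  const items: ProductFeedItem[] = await response.json();

  for (const item of items) {
    // A static scan of the application file finds none of these strings;
    // a search engine driving the runtime sees them exactly as a user would.
    console.log(`${item.title}: ${item.description}`);
  }
}

renderCatalog().catch(console.error);
```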

While better than nothing, searching static content leaves the search engine with a disjointed view of the content because it lacks the ability to do dynamic operations like assembling a full URL from a base URL variable and a path variable. But as anyone who has done any amount of SEO knows, indexing the right URL versus a close URL is the difference between customers finding your content and not.
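
A tiny, made-up sketch of what I mean, again in TypeScript; the variable names and URL are hypothetical:

```typescript
// Hypothetical example: the full link never exists as a single string in the
// file, so a static scan can only ever find the fragments.
const baseUrl = "https://example.com/store";  // one string constant
const productPath = "/widgets/blue-widget";   // another string constant

// Only at runtime do the pieces become the URL a customer would actually
// follow, which is the URL a search engine needs to index.
const productLink = `${baseUrl}${productPath}?ref=search`;

console.log(productLink); // https://example.com/store/widgets/blue-widget?ref=search
```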

As a final comment, I dislike Microsoft’s claim that XML makes an application inherently more searchable. For anyone writing a search engine that parses a file to find content, the difference between XML and an openly documented binary tag structure is trivial. I especially dislike the claim since Silverlight 2 is moving to compiled DLLs, which will actually obscure a lot of content.