Political discussion and ranting, premised upon the fact that even a stopped clock is right twice a day.
Monday, January 05, 2015
Hey, Firefox: Don't Mess With My Preferences
I know that Mozilla/Firefox ended its contract with Google and entered into a new search contract with Yahoo!, but that doesn't mean I want them to mess with my settings and make Yahoo! my default search engine.
Labels:
Browsers,
Firefox,
Google,
Search Engines,
Yahoo
Tuesday, May 01, 2012
New Spam Filter Needed: Anti Click-Baiting
Ever since a set of low-quality (yes, I'm talking about you, Arianna) and financially troubled online news enterprises decided that the best way to generate traffic was not by generating good content, but by generating catchy, often misleading, sometimes completely irrelevant headlines, the quality of news available through news aggregation sites has diminished. If news sites want to transform themselves into low-quality gossip columns, that's their business, but I hope the search engines find a way to filter for click-baiting headlines that, truly, are just another form of spam.
Tuesday, March 13, 2012
If You Think Google Takes Advantage Of You....
You can always block Googlebot from your site.
In a move aimed at helping newspapers generate new revenue from struggling online operations, the German government intends to require search engines and other Internet companies to pay publishers whose content they highlight....The proposed policy is directed at all search engines and aggregator sites, not just Google, but it's really the bottom line of the big players that has some publisher openly salivating.
The proposal was cheered by German publishers, who complain that Internet companies like Google have profited hugely from their content, while generating only scraps of digital revenue.
“In the digital age, such a right is essential to protect the joint efforts of journalists and publishers,” the Federation of German Newspaper Publishers said, adding that it was “an essential measure for the maintenance of an independent, privately financed news media.”
It's a fair response that a copyright holder shouldn't have to insert code into its content saying, "Please don't index this," but only to a point. If the companies at issue weren't already profiting from the traffic generated by Google, whether in money or prestige, they are all sophisticated enough to exclude their content from Google's sites and simply do without the web traffic.
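The opt-out described above is the Robots Exclusion Protocol: two lines in a site's robots.txt shut Google's crawler out entirely. Here's a minimal sketch, using Python's standard urllib.robotparser to verify the effect (example.com and the sample paths are placeholders):

```python
from urllib import robotparser

# A minimal robots.txt that excludes Google's crawler while leaving
# every other user agent free to index the site.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot is shut out of every URL on the site...
print(rp.can_fetch("Googlebot", "https://example.com/news/story.html"))
# ...while other well-behaved crawlers remain unaffected.
print(rp.can_fetch("Bingbot", "https://example.com/news/story.html"))
```

Granted, robots.txt is honored voluntarily by well-behaved crawlers - which Googlebot is - but that's the point: a publisher who wants out of Google's index has always had a working switch.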
Labels:
Copyright,
Germany,
Google,
Newspapers,
Search Engines
Thursday, June 23, 2011
Would You Buy a Domain on a New TLD?
If you purchase a new TLD (top-level domain) from ICANN, you have the potential to make a lot of money. You also have the potential to lose a lot of money.
Speaking of fees, if you want one of the new domain suffixes and are not a wealthy individual or company, get ready to put a major dent in your bank balance. The Icann application alone will be $185,000, with an annual fee of $25,000. Who sets this fee? Why, Icann, of course. Is it reasonable? Icann says it is. Why is it reasonable? Because Icann says, based on evidence that is less than persuasive, that it needs the money for things like legal costs.

So if you're not able to buy example.com, would you buy example.example? Domain owners have on occasion experienced difficulties when existing registrars have gone out of business, but they've largely been able to get past the problem by virtue of the fact that the TLDs are not owned by the registrar. If you pay somebody $6 for your example.example domain, though, and they decide they're not making enough money to justify the continued registration of the TLD, your domain name vanishes - and even if it's worth it to you to try to acquire the TLD, I expect that ICANN wouldn't only want you to increase your annual registration cost by $25,000 to keep your domain going; it would also want a new application fee - and would reserve the right to reject your application.
Particularly during the early years of the Internet there were a number of efforts to create alternatives to domain names, usually by allowing people to buy keywords and key phrases that would cause people using the sponsor's service, or perhaps a browser plugin, to navigate to the registrant's (ostensibly relevant) website. None ever acquired the volume of users necessary to succeed.
A partial bypass already exists for end users. It's called Google – though this also applies to Bing and other search engines. Internet users are learning that it's easier, almost always with better results, to type the name of the enterprise they're searching for into the browser's search bar than to guess at a domain name and type that guess into the address bar. Google isn't the DNS, but its method suggests new approaches. To that end, some technologists have suggested creating a DNS overlay, operated in a peer-to-peer way that incorporates modern search techniques and other tools. Making this workable and secure would be far from trivial, but it's worth the effort.

If you can come up with a viable, superior alternative to the URL and get a sufficient volume of users to adopt your system, you will become very rich.
Monday, January 10, 2011
Search Engine Sex?
Paul Krugman writes,
I’m not quite sure what search-engine sex would involve.

Perhaps I should leave this with "Don't Ask"?1 (Or should I go retro and suggest that you "Ask Jeeves"?)
Wouldn't this question have best been asked in the era of Excite and HotBot?
----------
1. You want search engine sex from Google? Go to settings and, under SafeSearch Filtering, choose "Do not filter my search results". Your next search result, regardless of topic, may inspire you to say, "Yech."
Monday, January 03, 2011
Google vs. the Spammers
In an article entitled "Why We Desperately Need a New (and Better) Google", a professor describes the experiences of his students, trying to get past the "tropical paradise for spammers and marketers" in Google:
Almost every search takes you to websites that want you to click on links that make them money, or to sponsored sites that make Google money. There’s no way to do a meaningful chronological search.

He notes,
The problem is that content on the internet is growing exponentially and the vast majority of this content is spam. This is created by unscrupulous companies that know how to manipulate Google’s page-ranking systems to get their websites listed at the top of your search results.... This is exactly what blogger Paul Kedrosky found when trying to buy a dishwasher.... He couldn’t make head or tail of the results. Paul concluded that “the entire web is spam when it comes to major appliance reviews”.

Pretty close. Some websites offer reviews of appliances that are genuine, albeit usually created by customers and thus sometimes unreliable or posted to the wrong make or model. You can sometimes find a decent, free resource such as Greener Sources from Consumer Reports. But most professionally conducted reviews for major appliances are not available for free, or are only available in highlight form as detailed in press releases by the authors or in news stories that discuss the findings. Meanwhile, as the article notes, even the "big boys", one-time ethical players in the "new media" market, are mass producing junk articles:
Content creation is big business, and there are big players involved. For example, Associated Content, which produces 10,000 new articles per month, was purchased by Yahoo! for $100 million in 2010. Demand Media has 8,000 writers who produce 180,000 new articles each month. It generated more than $200 million in revenue in 2009 and is planning an initial public offering valued at about $1.5 billion. This content is what ends up as the landfill in the garbage websites that you find all over the web. And these are the first links that show up in your Google search results.

But that's not necessarily Google's fault - nor is it necessarily the result of a fault in Google. Let's imagine we're a friendly neighborhood search engine spider looking for reviews of major appliances. We find a site that is known for offering quality reviews. Oops - the content is behind a paywall, for subscribers only, and we can't see it. So we keep looking, and find an "off the top of my head" article from Associated Content that at least offers a few tips on selecting an appliance. And we find a store site that has some consumer reviews for the appliance. And we find some low-quality sites that are primarily designed to push affiliate links. Then, finally, we find some machine-generated sites that either aggregate content from other sites or present garbled but keyword-rich text accompanied by ads. Google actually does a pretty decent job of ranking the pages it can see - the real problem is that, for the most part, the content the reader most wants - the costly and labor-intensive testing and comparison of major appliances across a number of relevant variables - simply isn't available for free.
The author implicitly recognizes the value and importance of paid content, having secured premium access to LinkedIn for his students. It's interesting to me that it was when the paid service failed -- "some of the [company] founders [students were to contact] didn’t have LinkedIn accounts" - that his students turned to Google. And the complaint is not that the paid service was inadequate, but that the free service didn't quickly and easily turn up the information that was not available for a fee through the paid service.
The author mentions a small search engine, Blekko, that he depicts as overcoming some of the problems faced by Google:
In addition to providing regular search capabilities like Google’s, Blekko allows you to define what it calls “slashtags” and filter the information you retrieve according to your own criteria. Slashtags are mostly human-curated sets of websites built around a specific topic, such as health, finance, sports, tech, and colleges. So if you are looking for information about swine flu, you can add “/health” to your query and search only the top 70 or so relevant health sites rather than tens of thousands of spam sites. Blekko crowdsources the editorial judgment for what should and should not be in a slashtag, as Wikipedia does. One Blekko user created a slashtag for 2100 college websites. So anyone can do a targeted search for all the schools offering courses in molecular biology, for example. Most searches are like this—they can be restricted to a few thousand relevant sites. The results become much more relevant and trustworthy when you can filter out all the garbage.

Except who crowdsources the crowdsourcers? Also, you can presently build a custom search engine with Google that does pretty much the same thing. Blekko really seems to be offering a way to switch by slashtag between custom search engines. Through its primary interface, Google is attempting to give you the results without the slashtags, and it seems unlikely that a search engine that requires users to learn slashtags in order to get meaningful results will ever be more than a niche player.
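Mechanically, a slashtag amounts to a whitelist of hosts applied to search results. A toy sketch of that filtering step (the domains and tag contents here are illustrative assumptions, not Blekko's actual curated lists):

```python
from urllib.parse import urlparse

# Hypothetical slashtag definitions: each tag is just a curated set of hosts.
SLASHTAGS = {
    "/health": {"cdc.gov", "nih.gov", "who.int"},
}

def apply_slashtag(results, slashtag):
    """Drop any result whose host isn't in the slashtag's curated set."""
    allowed = SLASHTAGS[slashtag]
    return [url for url in results if urlparse(url).hostname in allowed]

results = [
    "https://cdc.gov/flu/swine-flu.html",
    "https://spam-reviews.example/swine-flu-miracle-cure",
    "https://who.int/influenza/overview",
]
print(apply_slashtag(results, "/health"))
```

Note that all of the editorial work lives in the curated host sets, not the code - which is exactly why the curation itself becomes the spammers' target.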
If Blekko "hits it big" it will be swamped with spammers, as was one of the co-founder's first projects, the Open Directory Project. Also, while I don't want to diminish the idea of getting rich by leveraging the free labor of the crowd, as the Open Directory again suggests, at a certain point depending on free volunteer labor doesn't scale very well. If you've already sold your project and have moved on to your next one, that's no big deal. If you're trying to sustain a project that relies upon volunteers to sort through and meaningfully categorize billions of web pages, that failure to scale is critical.
Further, it's not just the creation of slashtags that creates opportunity for spammers. We're told that while Google tends to attach a date to a page based upon when it first finds the page, an oversimplification but seemingly true in many cases, Blekko determines when content was created by "analyzing other information embedded in its HTML". There's an obvious reason, though, why Google has not chosen that approach - I can easily set up my server to embed in my HTML the message, "This is a fresh, new page". Spammers don't care right now because very few people use Blekko and even fewer of them are top targets for spammers. But if Blekko catches on, that approach to dating content will be quickly exploited and rendered completely useless. As the author said, "unscrupulous companies... know how to manipulate Google’s page-ranking systems" - the same is true for any popular search engine. It's not clear that Blekko would survive any degree of popularity. But then, I expect that Blekko's goal is to find a company willing to buy its technology, not to actually become the next big search engine.
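To see how trivial it is to game an HTML-embedded date signal, consider this sketch (the meta tag name is a generic assumption - real dating heuristics vary): any server can stamp the current date into every page it serves, no matter how old the content.

```python
from datetime import date

def serve_stale_page(body: str) -> str:
    """Wrap old content in HTML that claims to have been published today."""
    today = date.today().isoformat()
    return (
        "<html><head>"
        f'<meta name="date" content="{today}">'
        "</head><body>"
        f"<p>Published {today}: {body}</p>"
        "</body></html>"
    )

page = serve_stale_page("A dishwasher review actually written years ago.")
print(page)
```

Every request gets a page that looks freshly published; a search engine that trusts the embedded date will rank stale content as new.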
Guess what else? In my non-scientific search for "appliance reviews" on Google and Blekko, I found Google's results to be better. Blekko offered more machine-generated compilations of user reviews, while (amidst similar compilations) Google came up with Good Housekeeping and Consumer Reports. A leading reason both searches included a lot of junk is, as previously mentioned, there's not a whole lot out there for free that's not junk. My suggestion? Find a popular forum with members who are familiar with the products and product lines that interest you, post a question about the products you are interested in buying, and see what the other members have to say.
Update: Speaking of things that make you go 'blekko'....
Labels:
Blekko,
Google,
Search Engines,
Spammers
Wednesday, September 08, 2010
My Day Just Got Longer?
I can now save eleven hours every second! Oh... that's a collective eleven hours? I knew there had to be a catch.
Seriously, when I ran a Google search earlier today and saw the results change in real time when I typed, I had a, "What the... Oh, another Google experiment with Ajax" moment. I expect I'll find it useful over time. And I expect Google will encounter resistance from those who liked things the old way and will interpret anything Google says as, "It's not a bug - it's a feature".
I suspect this explains why I now sometimes get a warning when trying to close a Google SERP.
Labels:
Google,
Search Engines,
Technology
Tuesday, December 29, 2009
How Did This Get Published?
When you see a really silly editorial published in the New York Times you think... well, at least it's not the Post. But sometimes you really have to wonder about the agenda. By way of example, a guy who runs an insignificant company in England was given space to rant and rave about how his company is pretty much invisible to Google, speculate that it's because of a "penalty", and present absolute claptrap about how Google has no business advocating for network neutrality if his company can't outrank superior, vastly more popular websites.
Today, search engines like Google, Yahoo and Microsoft’s new Bing have become the Internet’s gatekeepers, and the crucial role they play in directing users to Web sites means they are now as essential a component of its infrastructure as the physical network itself. The F.C.C. needs to look beyond network neutrality and include “search neutrality”: the principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance.

The author's missive is directed at Google, not at Yahoo!, which seems odd given that Google is much more algorithm-driven than Yahoo! and is disinclined to "hand edit" even embarrassing search results (as compared to Yahoo!'s hand-editing, including to self-promote).
Moreover, in this quest for "neutrality", the author fails to specify what that concept means or how it could be measured. For example, one factor Google considers is whether people link to a site or its internal pages. If nobody's linking to the author's site, it's quite likely that the site isn't worthy of links - or that there are superior alternatives that get the links. Another big factor is unique content - sites like the author's, that rely almost exclusively on product feeds for their content, have virtually none. The author's site invites ratings, but I see no evidence that anybody has ever added a rating - theoretical unique content is not unique content. What do you get if you browse the site? Product listings, vendor information about the products, and affiliate links. Hardly a paradise for the consumer. And an overall site design and architecture that's not particularly search-engine friendly, relying heavily on iframes.
Reading the New York Times editorial you might be confused, and think that this is a big company that has invested in a serious innovation, yet cannot break through Google's iron wall. Hardly. This is a website founded by the author and his wife, with programming assistance from a family friend. It's the type of site a skilled programmer could knock off in an afternoon. It is no surprise to me that the site felt "punished" by Google, as a few years ago Google modified its algorithm to diminish the presence of sites just like the author's - sites that have essentially nothing to offer to the consumer beyond recycled information and affiliate links. Frankly, I personally think that too many sites like the author's still show up in Google's search results - and remain far too prevalent in Yahoo! Search and Bing. Unless and until it started to offer some real value, my ideal search engine wouldn't just relegate his site to the far reaches of the search results. My ideal search engine would exclude his site altogether.
Here's something I find odd. The author has been whining about problems with Google for years. The site was featured in a Guardian article several months ago. For all of the energy the author puts into complaining, it doesn't appear that he's expended any effort on improving his own website. Instead, in response to the suggestion that his site needs to present unique content, he whines that his site's replication of information easily found in other sources "is, in essence, all that Google itself does". Cute. Except Google tries to drive consumers to sites that match their search needs, rather than confining them to a set of affiliate merchants. And, unlike the author's company, Google does it well. And of course the author is begging the question - the issue is not that Google doesn't incorporate sites like his into the search results. The issue is that when there are tens, hundreds, thousands or tens of thousands of sites with the same content, those that offer nothing more than replication of the third-party content deserve to rank at the bottom.
Further, the author claims to be in the same business as Google - the search business - and then complains that Google favors its own shopping search engine results over his. With all due respect, isn't that what you would expect a business competitor to do? But really, his site isn't a competitor with Google. People go to Google looking for information on products, and he hopes that they will come to his site from Google, follow an affiliate link, and make him a commission. Few people come to his site first, and none of them are directed to Google. From a search engine standpoint, he's an unnecessary middle step between the person searching for a product and the desired product - cut him out of the middle and the search engine user experience improves.
But don't just take my word for it. After the Guardian ran its story, there was a significant upward spike in traffic to the author's website. Even coming into the Christmas buying season, that spike didn't translate into any subsequent increase in traffic. That is, for the most part people seem to have taken a look and asked themselves, "That's all they have to offer?", then forgotten about the site.
The author also blames Google for a loss of traffic to MapQuest. I used MapQuest, often through its partnership with Yahoo!, quite regularly before Google Maps came out with its innovative AJAX interface and blew MapQuest's socks off. Google Maps was a vastly superior product. Yahoo! doesn't even partner with MapQuest any more - is that Google's fault as well? Now it may hurt to be running a company and have a competitor produce a far superior product and take away your market share, but that's the way the markets are supposed to work. The same goes for the author's complaint that the share price of TomTom has dropped now that Google is making available a free turn-by-turn navigation service - what duty does Google owe TomTom's shareholders? Should companies be forbidden from giving away a service if a (sort-of) competitor would prefer to charge for a similar service? Should TV Guide be allowed to forbid cable companies from providing their customers with on-screen channel guides?
The author also whines that Google acquires technology from other companies. So what? Mergers, acquisitions, and the purchase and licensing of intellectual property play a big role in how companies operate.
Beyond that, the author's back to complaining that Google's cutting out the middleman - promoting its own search products through Universal Search instead of directing people to a third party website that it does not control in the hope that the third party will responsibly direct the consumer to an appropriate destination. Don't get me wrong here - Google's ability to leverage its way into new markets is a subject for valid concern - but it's not surprising that they favor their own sites when they're striving to provide a consistent, quality user experience.
So how did the story cross the pond, with no mention by the Times that the author is writing about a tiny, U.K.-based website? In my opinion, because the story was picked up by the industry groups that oppose network neutrality, and who hope that this type of idiocy can cloud the picture. We even get a silly parallel term, "search neutrality".
Google was quick to recognize the threat to openness and innovation posed by the market power of Internet service providers, and has long been a leading proponent of net neutrality. But it now faces a difficult choice. Will it embrace search neutrality as the logical extension to net neutrality that truly protects equal access to the Internet? Or will it try to argue that discriminatory market power is somehow dangerous in the hands of a cable or telecommunications company but harmless in the hands of an overwhelmingly dominant search engine?
There is absolutely no parallel between network neutrality and this nebulous concept of "search neutrality". None. You will not be able to get two people in a room to agree as to how a given set of websites should be comparatively ranked, let alone all of them. Search engines have good cause to keep the details of their algorithms secret - it prevents people from gaming the system and spamming search results. For all the complaints that more transparency would be nice, this article from 2002 still does a pretty good job of describing what you need to do to make your site succeed. Sure, it's easier to take the author's approach and spin up a program that creates a website automatically from content created by others, but it's no secret that such an approach rarely, if ever, brings long-term success.
More than that, in his zeal to attack Google, the author flips the idea of network neutrality on its head. Without network neutrality, the companies that offer Internet bandwidth can charge people at either end of a digital "transaction" for the privilege of sending their data over its wires - and without payment they could slow the transmission of the data down to a crawl or cut it off entirely. (Of course they could - and would - allow their own competing products across their networks, unimpeded.) You could pay more as a consumer for broader bandwidth, but perhaps still have your service provider narrow or cut off the bandwidth to sites or services you want to use. The service provider could also pay a fee for increased bandwidth.
Nobody's going to pay an extra penny to get access to the author's website, and he doesn't have the money to pay in their stead. The loss of network neutrality would cost Google money, as it paid for the bandwidth - but Google has the money with which to pay. And that would serve to cement its position (and that of companies like Microsoft) at the top of the heap, while a new, prospective competitor to Google - the tiny company that grows based on word of mouth, just as Google did when it quickly gained acclaim and displaced former search leader AltaVista - would be unlikely to even have a chance.
So will the New York Times tell us, what lobbyist or telecom industry insider convinced them to run the editorial? (I'm not holding my breath.)
Saturday, September 26, 2009
Wow, Google's Fast....
Not so many years back you would put content up on the web, wait for evidence that a search engine had spidered your site, then wait a few months for it to start showing up in search engine results. Google put an end to all that, and has brought us something close to instant gratification....
That post I put up earlier today (on Legal Media Relations) sorta poking fun at a web PR guy and his "authoritarian" web presence? It's presently showing up for "legal media relations" in Google - in the number three spot, behind that guy's principal sites. You heard it here first, folks: The stopped clock is now displayed in five or more first-page, top-10 results (although I'm still not sure what that means).
The lesson of the day: Don't suggest that there's skill involved in ranking for a search phrase for which you have no competition.
Update (Sept. 29): Easy come, easy go. There's apparently still a "freshbot" phenomenon in Google - which makes sense - that gives a temporary boost to the newest content its spiders find. I still rank for the phrase "legal media relations", but now on page two of the SERPs.
Update (Oct. 4): Easy go, easy come? I'm back in spot #3 for "legal media relations" in Google.
Labels:
Google,
Marketing,
Search Engines
Wednesday, March 04, 2009
The Ultimate Measure of Everything
Apparently, Google's search suggestions....
For now, though, the banks still threaten to consume the Obama presidency. Indeed, I’m sorry to report that if you just type two letters into Google — “b-a” — the first thing that comes up is not Barack Obama. It’s “Bank of America.” Barack Obama is third.

And if you type in "Thomas", Thomas Cook is first - lots of Brits want to explore the flatness of the Earth, I guess.... Thomas the Tank Engine is third. Thomas Friedman? Well below the flat case goods of Thomasville Furniture and the flat canvases of Thomas Kinkade, sandwiched at eighth place between Thomas Pink and Thomas Jane.
For what it's worth.
Which... honestly... isn't much more than Friedman's other observation that when you type in the letters "MERE", Meredith Whitney beats out the next suggestion for the top spot.... meringue. A foodstuff almost as fluffy as Friedman's analysis.
Tuesday, July 01, 2008
Google Suggests....
I've commented on Google's automated search suggestions before, but this one is funnier (and, well, a bit more risqué) - "what to do if the inside of a grill gets wet".
Labels:
Google,
Humor,
Search Engines
Thursday, January 04, 2007
Why Search Engines Annoy Me
Don't get me wrong - I use search engines a lot, and find them invaluable for navigating the Web. But for finding new sites whose owners don't have the resources to launch an advertising or public relations campaign? They're not so good any more.
An oversimplified history: It wasn't so long ago that if you put up a website and worked to develop it, it could become an authority site. People relied heavily on directories to locate sites, listings were free, and search engines focused to a significant degree on page content when determining relevance. Then came the rise of "pay for inclusion", coupled with the rise of Google. Directories became less and less important to traffic, while search engines became dominant. Initially this didn't make much difference, as Google's big advance over other search engines was its analysis of linking between sites and pages. That type of weighting was soon emulated by other search engines. But problems developed for the smaller webmaster.
First, as search engines became dominant, the proliferation of "links pages" and small directories that used to be found all over the Internet started to diminish, and their owners became less interested in maintaining pages of links, making it harder to develop "natural" links. Second, many of what were once common mechanisms for building links (e.g., trading links with another webmaster) are automatically suspect and likely to be devalued by search engines, as they are so easily abused. From a webmaster's perspective, the best links are one-way links from topically relevant pages of popular sites.
For a new site, no matter how good, the owner usually faces the conundrum that the site lacks sufficient links to rise to the top of search engine results pages, and as nobody is finding the site through search engines it isn't developing "natural" links... and thus languishes in obscurity. The response you often hear from representatives of the search engines is that quality sites will develop links over time - but from what I have seen, that's usually not the case. When it is the case it is often because the site has invested in a link development service, which charges an hourly or "per link obtained" fee to obtain one-way links to the new website.
Oh, how painful it can be watching a lousy website float at the top of the SERPs, like a turd in a septic tank, just because "they got there first".
Blogging to some extent revived natural linking - bloggers link to other weblogs that they like, usually whether or not they get a link back. Many early weblogs gained lots of links, and were thus classified as "important" by search engines. Newer bloggers face the problem that many established bloggers neglect, or simply choose not to update or expand, their "blogrolls", and the fact that no matter how good their content there are now usually other blogs that address the same topic reasonably well, making it harder for them to stand out. Also, blog searches have to a significant degree been shifted from regular search engine results to dedicated blog search interfaces, making it less likely that a new blogger will be found through a standard search engine.
Until the next big thing comes along, and you can again get in on the ground floor, the status quo makes it difficult to develop traffic to a new site. The work-around, perhaps, is to find an on-topic site with a reasonable URL and to try to acquire it from its current owner. (Anticipate, though, that the current owner of a site that hasn't been updated in five years, and which looks like it was designed by a color-blind eight-year-old, will insist that the site is worth a small fortune.) But given the cost of generating similar traffic through advertising, on many commercial subjects they may be right.
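For the curious, the link-based weighting that gave Google its early edge can be sketched in a few lines. This is a minimal, illustrative PageRank-style calculation over a hypothetical four-page web; the 0.85 damping factor is the commonly cited default, not anything taken from Google itself:

```python
# Hypothetical four-page web: each page maps to the pages it links out to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively distribute each page's score across its outbound links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

ranks = pagerank(links)
# "c" scores highest (three inbound links); "d" lowest (no inbound links).
print(sorted(ranks, key=ranks.get, reverse=True))
```

The point of the sketch is the webmaster's complaint above: page "d" could have the best content on the web, but with no inbound links it stays at the bottom.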
Labels:
Search Engines
Sunday, March 26, 2006
Blaming Google
In an article that tries to blame Google for the decline of literacy in society, Edward Tenner writes:
Many students seem to lack the skills to structure their searches so they can find useful information quickly. In 2002, graduate students at Tel Aviv University were asked to find on the Web, with no time limit, a picture of the Mona Lisa; the complete text of either "Robinson Crusoe" or "David Copperfield"; and a recipe for apple pie accompanied by a photograph. Only 15 percent succeeded at all three assignments.
Actually, Google makes it easy.
Today, Google may have expedited such tasks, but the malaise remains.
The Mona Lisa.
Robinson Crusoe and David Copperfield.
Apple Pie.
More owners of free high-quality content should learn the tradecraft of tweaking their sites to improve search engine rankings.
Spoken like somebody who knows nothing about SEO (Search Engine Optimization), or how hard it can be to "rank" for a commercially competitive keyword.
Labels:
Google,
Search Engines
Thursday, January 19, 2006
What Would the Bush Administration Make of This?
Can typos get you somehow tangled up in an absurd fishing expedition?
I was looking up the Texas Code on Google, but was a bit clumsy and ended up searching for "Texas cod". Google helpfully asked, "Did you mean: texas coed"?
Labels:
Google,
Privacy,
Search Engines
Thursday, November 03, 2005
The Future of Search
Granted, this online search involved some paid services, but it provides some interesting glimpses into the future of search and privacy.
Using nothing more than a swab of saliva and the internet, a 15-year-old boy has tracked down his anonymous sperm donor father, according to details released today.
The article observes,
* * *
The boy took the saliva sample late last year and sent it off to an online genealogy DNA-testing service called FamilyTreeDNA.com. For a fee of $289 (£163) the boy had his genetic code available for other members of the site to search. Although the boy's genetic father had never supplied his DNA to the site, after nine months the boy was contacted by two men who were on the database and whose Y-chromosome matched his own. The two men did not know each other, but shared a surname, albeit with a different spelling, and the genetic similarity of their Y-chromosomes suggested there was a 50% chance that the two men and the boy shared the same father, grandfather or great-grandfather.
The boy's ability to use publicly available genetic tests and internet searches suggests that police forces could do the same and obtain the surnames of potential suspects with DNA samples gathered from crime scenes.
Absolutely. Rather than comparing DNA to samples from individuals in a state database, the Y-chromosome from any given male suspect could be compared against the growing genealogical database to potentially identify possible suspects (or their families) even when they aren't in the state system.
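The kind of lookup the article describes can be sketched simply. The marker names, values, and surnames below are entirely hypothetical, just to illustrate how a Y-STR profile might be matched against a genealogical database to surface candidate surnames:

```python
# Hypothetical genealogical database: surname -> Y-STR marker profile.
# Marker names follow the DYS naming convention, but the values are invented.
database = {
    "Stacey": {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13},
    "Stacy":  {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13},
    "Miller": {"DYS19": 15, "DYS390": 22, "DYS391": 10, "DYS393": 12},
}

def close_matches(sample, database, max_mismatches=1):
    """Return surnames whose profiles differ from the sample at no more
    than max_mismatches markers."""
    hits = []
    for surname, profile in database.items():
        mismatches = sum(
            1 for marker, value in sample.items()
            if profile.get(marker) != value
        )
        if mismatches <= max_mismatches:
            hits.append(surname)
    return hits

sample = {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13}
print(close_matches(sample, database))  # matches the two surname variants
```

As in the boy's case, a close match doesn't identify an individual; it narrows the field to a family name, after which ordinary searching does the rest.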
Labels:
Search Engines
Subscribe to:
Posts (Atom)