In an age of LLMs, is it time to reconsider human-edited web directories?

Back in the early-to-mid '90s, one of the main ways of finding anything on the web was to browse through a web directory.

These directories generally had a list of categories on their front page. News/Sport/Entertainment/Arts/Technology/Fashion/etc.

Each of those categories had subcategories, and sub-subcategories that you clicked through until you got to a list of websites. These lists were maintained by actual humans.

Typically, these directories also had a limited web search that would crawl through the pages of websites listed in the directory.

Lycos, Excite, and of course Yahoo all offered web directories of this sort.

(EDIT: I initially also mentioned AltaVista. It did offer a web directory by the late '90s, but this was something it tacked on much later.)

By the late '90s, the standard narrative goes, the web got too big to index websites manually.

Google promised the world its algorithms would weed out the spam automatically.

And for a time, it worked.

But then SEO and SEM became a multi-billion-dollar industry. The spambots proliferated. Google itself began promoting its own content and advertisers above search results.

And now with LLMs, the industrial-scale spamming of the web is likely to grow exponentially.

My question is, if a lot of the web is turning to crap, do we even want to search the entire web anymore?

Do we really want to search every single website on the web?

Or just those that aren't filled with LLM-generated SEO spam?

Or just those that don't feature 200 tracking scripts, and passive-aggressive privacy warnings, and paywalls, and popovers, and newsletters, and increasingly obnoxious banner ads, and dark patterns to prevent you cancelling your "free trial" subscription?

At some point, does it become more desirable to go back to search engines that only crawl pages on human-curated lists of trustworthy, quality websites?

And is it time to begin considering what a modern version of those early web directories might look like?

@degoogle #tech #google #web #internet #LLM #LLMs #enshittification #technology #search #SearchEngines #SEO #SEM

80 comments
  • @ajsadauskas @degoogle What we need to do is re-visit the GnuPG philosophy of building rings of trust. If one emerges with enough people proven to provide quality aggregators/summarizers then we can start to depend on that, or those.

  • I used them and contributed to links as well - it was quite a rush to see a contribution accepted because it felt like you were adding to the great summary of the Internet. At least until the size of the Internet made it impossible to create a user-submitted, centrally-approved index of the Net. And so that all went away.

    What seemed like a better approach was social bookmarking, like del.icio.us, where everyone added, tagged and shared bookmarks. The tagging basically crowd-sourced the categorisation and meant you could browse, search and follow links by tags or by the users. It created a folksonomy (thanks for the reminder Wikipedia) and, crucially, provided context to Web content (I think we're still talking about the Semantic Web to some degree but perhaps AI is doing this better). Then after a long series of takeovers, it all went away. The spirit lives on in Pinterest and Flipboard to some degree but as this was all about links it was getting at the raw bones of the Internet.

    I've been using Postmarks, a single-user social bookmarking tool, but it isn't really the same as del.icio.us, because part of what made that work was the easy discoverability and sharing of other people's links. So what we need is, as I named my implementation of Postmarks, Relicious - pretty much del.icio.us but done Fediverse style, so you sign up to instances with other people (possibly organised around shared interests or regions, so you could have a body modification instance or a German one, for example) and get bookmarking. If it works and people find it useful, a FOSS Fediverse implementation would be very difficult to make go away.

    • Pinboard and TinyGem come to mind.

      • Oh indeed there are services out there that do something similar to Delicious, but I put a lot into that site only for it all to disappear due to the whims of some corporate overlord and I am not doing that again. What I am looking for is an easy Fediverse solution so my data is never lost again. Postmarks is definitely getting there but as a single-user service it isn't quite what I am looking for.

    • @Emperor @ajsadauskas I've been thinking about this myself lately - but I had wondered how a curated directory might scale, I hadn't considered federated social bookmarking and honestly that sounds like a brilliant solution. I'd love to see something like that happen, maybe even contribute

      • As the links show, Relicious/Fedilicious has been on my mind a while and I have been mourning the loss of Delicious for a long time. However, the above got me jotting down some notes.

        It should be doable. I haven't had a root through Postmarks' code, but it might be that they have done the bulk of the work already and it just needs a multiuser interface bolting on top.

      • @Wren @Emperor @ajsadauskas Back in the day people's web sites had a links page and if their site was good it was always worth looking at what they listed as worthy links. I still have one but it's out of habit rather than being useful. Might rethink now tho.

    • @Emperor
      This this this! Some kind of service that would sit alongside a fedi instance and serve as a community directory.
      @ajsadauskas

      • Indeed. Places like Lemmy and Reddit might be called "link aggregators" but they are, ultimately, jumped up web forums (and that's no slight, I'm a web forum guy through and through) and are nothing like the social bookmarking sites, like Delicious, which had greater breadth and depth (just look at your own bookmarks, you'd only share a fraction on here but you put a larger percentage into social bookmarking) but, crucially, essentially crowd-sourced the organisation and categorisation of those links.

        Some kind of service that would sit alongside a fedi instance

        I have been pondering the idea of "Fediverse plug-ins" that would do that, extending the core functionality of the service.

        So, in the case of what we'll call Fedilicious, users of the service could either punt links they post to Mastodon or Lemmy over to a social bookmarking plug-in, where they are stored and categorised (or you could run a bot to do this automatically), but they could also add links that might not be worth a new post, links worth storing away for future reference, etc. You would then have a curated, easily-accessible repository of links that reflects the interests of that instance.

        It needn't itself be federated but, if it were, you could have some "everything" sites (fedilicious.world?) which would accept all links from the other Fedilicious instances they are federated with (those instances would tend to be set to broadcast mode, so categorised links go out but they don't receive all the links, although users could be allowed to add links from elsewhere).

    • @Emperor @ajsadauskas that's Lemmy?

      • Although Lemmy is called a link aggregator it is really just a kind of web forum and nothing like a social bookmarking service.

  • @ajsadauskas @degoogle Since I run a small directory this is a fascinating conversation to me.

    There is a place for small human edited directories along with search engines like Wiby and Searchmysite which have human review before websites are entered. Also of note: Marginalia search.

    I don't see a need for huge directories like the old Yahoo, Looksmart and ODP directories. But directories that serve a niche ignored by Google are useful.

    • @bradenslen @ajsadauskas @degoogle looksmart! There's a blast from the past.

      As a very early internet user (suburbia.org.au - look it up, and who ran it) and a database guy, what I learnt very early is that any search engine needed users who knew how to write highly selective queries to get highly specific results.

      Google - despite everything - can still be used as a useful tool - if you are a skilled user.

      I am still surprised that you are not taught how to perform critical internet searching in primary school. It is as important as the three Rs.

    • But directories that serve a niche ignored by Google are useful.

      This is a good point - as search is increasingly enshittified too (from top down, with corporate interests, and bottom up, from SEO manipulation and dodgy sites) it makes sense for topics or communities often drowned out by the noise.

      I also see you are using webrings - another blast from the past that has its uses.

    • Indeed. As I mentioned below, something like a webring (a FedRing) might be the solution to something I was pondering.

      It is increasingly clear to me that a lot of directions Web 1.0 was evolving in were diverted or just killed off by Big Tech's land grab, which built walled gardens. I see the Fediverse as a return to the idea of blogs (micro and macro), forums, etc., but in a more natural progression to interoperability. This still isn't perfect, and there may be other early web ideas, like webrings, that improve discoverability.

  • @ajsadauskas @degoogle I actually contributed to one! I was a writer at LookSmart for four years; we manually created categories and added websites to them, with short descriptive reviews. Though an algorithm listed more sites below our selections, we could force the top result, eg we'd make sure the most relevant website was the first result of a search on that topic. Old-skool now, but had better results in some ways.

  • @ajsadauskas @degoogle DMOZ was once an important part of the internet, but it too suffered from abuse and manipulation for traffic.

    For many DMOZ was the entry point to the web. Whatever you were looking for, you started there.

    Google changed that, first for the better, then for the worse.

  • @ajsadauskas @degoogle

    Yes to all. For a while I've been de facto using a minuscule subset of the web. My gateway to other, relevant websites is via human-to-human recommendations, primarily in a place like this.

    • @ajsadauskas @degoogle

      And just now, as seen at the bottom of a blog post:

      "Post a Comment
      Unfortunately because of spam with embedded links (which then flag up warnings about the whole site on some browsers), I have to personally moderate all comments. As a result, your comment may not appear for some time. In addition, I cannot publish comments with links to websites because it takes too much time to check whether these sites are legitimate."

  • @ajsadauskas @degoogle I used to be one of those human editors. I was the editor of Scotland.org from about 1994 to about 1997, back in the days when it was exactly one of those hierarchical web directories – with the intention of indexing every website based in Scotland.

    • @ajsadauskas @degoogle having said that, the patents on Google's PageRank algorithm have now all expired, and a distributed, co-op operated search engine would now be possible. Yes, there would be trust issues, and you'd need to build fairly sophisticated filters to identify and exclude crap sites, but it might nevertheless be interesting and useful.
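
      For anyone curious about what a co-op engine would actually be computing, the core of PageRank is simple enough to sketch in a few lines. This is an illustrative power-iteration version under my own assumptions (a toy `graph` dict, default damping of 0.85) - not Google's production system:

```python
# Minimal PageRank via power iteration -- an illustrative sketch, not
# Google's production algorithm. Assumes every link target also appears
# as a key in `graph`.
def pagerank(graph, damping=0.85, iterations=50):
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        # Every page gets a base share; the rest flows in via links.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, links in graph.items():
            if links:
                share = damping * rank[page] / len(links)
                for target in links:
                    new_rank[target] += share
            else:
                # Dangling page (no outlinks): spread its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A page linked to by more pages ends up with a higher rank:
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
```

      The trust problem mentioned above would sit on top of this: the scoring itself is straightforward, but deciding which sites are allowed into the link graph at all is where the human curation (or crap-site filtering) comes in.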

  • @ajsadauskas Back when, UW Madison hosted an outfit called The Internet Scout Project that was in the curation business for web resources. The decaying state of search (alternatively the growth of web resources intended to serve interests other than their visitors') has me thinking it would be good to work with public libraries to convene and host this sort of thing.

    Librarianship is the right sort of ethos for it, and libraries are infrastructure for human-mediated discoverability.

    @degoogle

  • What's to say we won't have AI-curated lists and directories? That way we don't have to deal with link rot and the like. I think the issue is the algorithms used for search. We need better ones, better AI, not more frivolous human labor.

  • @ajsadauskas
    I agree we need better and remember the early days well. Before indexes we passed URLs, in fact just IP addresses of servers we'd visit to see what was there, and that was often a directory of documents, papers etc. It filled us with awe, but let's not dial back that far!

    Another improvement will be #LocalLLMs both for privacy and personalised settings. Much of the garbage now is in service of keeping us searching rather than finding what we want.
    @degoogle

  • @ajsadauskas @degoogle
    I've already seen new webrings forming.

    Or maybe that was old webrings updating?

      Yeah, I was just looking at a webring and thinking "these still have a use". They could definitely help with discoverability on a broad front. I help admin feddit.uk and had pondered reaching out to other British Fediverse services to make a Britiverse. However, how to hold it all together and navigate between them was proving tricky or clunky, until I was looking at the webring and thought "FedRing". Now that could work.

  • @ajsadauskas @degoogle I mean, we could still use all the modern tools. I'm hosting a SearXNG instance manually, and there is an ever-growing block list for AI-generated websites that I regularly import to keep it up to date. You could also flip it into an allow-list approach: block all websites by default and permit sites gradually.
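
    For example, recent SearXNG versions ship a hostnames plugin that can drop results matching a domain pattern in settings.yml. The domains below are placeholders, and option names can differ between versions, so check the docs for your instance:

```yaml
# settings.yml (fragment) -- SearXNG hostnames plugin, a sketch.
# Each entry is a regex matched against result hostnames.
hostnames:
  remove:
    # Drop any result from these (placeholder) spam domains:
    - '(.*\.)?content-farm\.example$'
    - '(.*\.)?seo-spam\.example$'
```

    An allow-list setup would invert the logic: match everything by default and carve out exceptions for the sites you trust.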

    • @ajsadauskas @degoogle I started that because it bothered me that you couldn't just report a website to DuckDuckGo that was obviously a Stack Overflow scraper. This problem has persisted for as long as Reddit and Stack Overflow have been a thing. Why are there no measures from search engines to get a hold of it?

      I never understood that.

  • @ajsadauskas @degoogle

    It would be sad to go back to walled gardens like AOL, particularly since they were corporate-owned. But a sort of Kite Mark, certifying a site is free of LLMs, would be useful. Then users could choose for themselves.
