Live at: https://zenmumbler.net/dtbb
This is a website concept I made to quickly search through the many entries submitted for the Ludum Dare game jams. Neither the old nor the new official LD sites are particularly good at searching through the games.
Site features:
Try it now!
In the following, substitute pnpm with the package manager you use; I use pnpm.
pnpm install
pnpm run dev
This will also start a local server with autoreload.
pnpm run build
NOTE WELL: the full processed data for all supported events is already present in the
site/data/ldXY_entries.json files. All of the spidered files (except for thumbnails) are
also present, though zipped, in import/spider_data/entry_pages. Unzip these to use them
in the import extraction process. Only mess with the import stuff if you find it interesting
for some reason.
In the import folder, run node import to get a list of the available commands; right now they
are listing, entries, thumbs and extract. Each of these commands takes 1 or 2 numbers
as parameters: the starting and ending LD event numbers ("issues") to process.
listing 15 gets the entry listing for LD 15.
entries 20 25 downloads the entry pages for LDs 20 through 25 inclusive.
etc.
entries and thumbs require the data downloaded by listing, and extract requires the
entry pages downloaded by entries. So to download and process all the data you'd do
something like:
node import listing 15 38
node import entries 15 38
node import extract 15 38
node import thumbs 15 38 (optional)
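For orientation, here is a minimal sketch of what a command dispatcher driven by an issue range could look like; the handler names and bodies below are placeholders, not the actual functions in the import folder.

```typescript
// Sketch of an issue-range CLI dispatcher; handler names/bodies are hypothetical.
type IssueHandler = (issue: number) => Promise<void>;

const handlers: Record<string, IssueHandler> = {
	listing: async (issue) => { /* download the entry listing for one LD */ },
	entries: async (issue) => { /* download all entry pages for one LD */ },
	thumbs: async (issue) => { /* download entry thumbnails for one LD */ },
	extract: async (issue) => { /* extract catalog data from saved pages */ },
};

async function main() {
	const [command, fromArg, toArg] = process.argv.slice(2);
	const handler = handlers[command];
	if (!handler || fromArg === undefined) {
		console.info("usage: node import <command> <from> [<to>]");
		console.info("commands: " + Object.keys(handlers).join(", "));
		return;
	}
	const from = parseInt(fromArg, 10);
	const to = toArg !== undefined ? parseInt(toArg, 10) : from;
	for (let issue = from; issue <= to; issue += 1) {
		await handler(issue); // process each LD issue in the range, in order
	}
}

main().catch((err) => { console.error(err); });
```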
Note that each of these operations will take quite some time. The scraping happens sequentially, both for simplicity and to avoid hammering the LD site too much; a full extract of all ~35k entries will take around 20-30 minutes.
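As an illustration of that sequential approach (not the importer's actual code), a polite download loop might look like this; the delay value and the shape of the saveTo callback are assumptions, and fetch here is the built-in of Node 18+.

```typescript
// Sketch of polite, sequential scraping; delay value and callback shape are assumptions.
const DELAY_MS = 500;

function delay(ms: number) {
	return new Promise<void>((resolve) => setTimeout(resolve, ms));
}

async function downloadPages(urls: string[], saveTo: (url: string, body: string) => Promise<void>) {
	for (const url of urls) {
		const response = await fetch(url);        // one request at a time, no parallelism
		await saveTo(url, await response.text()); // persist the page before moving on
		await delay(DELAY_MS);                    // pause so the LD site isn't hammered
	}
}
```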
LDs before #15 did not have a structured submission system in place and are not supported. The importer supports, for the most part, importing events on the new ldjam.com site (#38 and newer). The main thing disabled is platform detection, which yielded too many empties/false positives on the data from the new site.
The site is a client-only web app; there is no server component. It is hosted as an S3 static website, fronted by Cloudflare, which handles caching, asset compression, minification and other fun stuff. This has the advantage of a very low cost for me (think cents per month), as I don't have to pay for web hosting or EC2 instances, and it forced me to be creative running everything locally.
So while this project started out mainly to address my frustration with the aging LD website, it changed into a project where I could explore and practice with several web (dev) features that I had not done much with. So if things are a bit more complex than they need to be for an app this small, that's why. To wit, I've made/done the following:
The data in the live site was scraped from the old and new Ludum Dare websites. DTBB has a full copy of all thumbnails and catalog data hosted on S3.
The platform categorisation of entries is based on their download links and titles. I tried to be reasonably smart but there may be false positives.
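To give an idea of what such a heuristic looks like, here is a rough sketch; the keyword patterns below are my own examples, not the exact rules the importer uses.

```typescript
// Sketch of link/title based platform detection; the keyword patterns are illustrative only.
const PLATFORM_HINTS: Record<string, RegExp> = {
	windows: /\b(windows|win32|win64|\.exe)\b/i,
	mac: /\b(mac|macos|os ?x|\.dmg)\b/i,
	linux: /\b(linux|ubuntu)\b/i,
	web: /\b(web|html5|browser|unity ?web)\b/i,
};

function detectPlatforms(links: { title: string; url: string }[]): Set<string> {
	const platforms = new Set<string>();
	for (const link of links) {
		const haystack = `${link.title} ${link.url}`;
		for (const [platform, pattern] of Object.entries(PLATFORM_HINTS)) {
			if (pattern.test(haystack)) {
				platforms.add(platform);
			}
		}
	}
	return platforms; // may be empty or contain false positives, as noted above
}
```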
Neither this project nor I am affiliated with or endorsed by Ludum Dare staff. I do not own or claim to own the data extracted from the LD site. In fact, if you want to make something cool yourself, use the ldXY_entries.json files in the site/data dir and have a go.
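As a starting point, reading one of those catalog files from Node could look roughly like this; the field names in CatalogEntry are guesses about the JSON shape, so inspect the files before relying on them.

```typescript
// Sketch of consuming a catalog file; the entry fields are assumptions, check the JSON first.
import { readFile } from "node:fs/promises";

interface CatalogEntry {
	title: string;
	author: string;
	platforms: string[];
	// ...plus whatever else the extractor stored per entry
}

async function loadCatalog(ldIssue: number): Promise<CatalogEntry[]> {
	const raw = await readFile(`site/data/ld${ldIssue}_entries.json`, "utf8");
	return JSON.parse(raw) as CatalogEntry[];
}

// example: count the web-playable entries of one event
loadCatalog(38).then((entries) => {
	const webGames = entries.filter((entry) => entry.platforms.includes("web"));
	console.info(`${webGames.length} web entries`);
});
```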
Now go and make, play and rate games.