I encourage making websites return jwz balls when HN retitles your stuff. I assume JWZ is ok with being a CDN for this purpose.
Hacker News adds a question mark to "The man who killed google search" to make it more "accurate", despite not having read it (or even AI-summarized it, I guess?)
Meanwhile, Metafilter shows me again why I love it despite itself.
The reverse engineering of the JS they're doing is particularly amusing. Like, it contains 679 embedded JavaScript libraries and all of their licenses, and 1 MB of every load is used to send those licenses over the wire.
It's been illuminating to watch #starlink's web interface bit-rot over the past couple of years. Last month it rotted away entirely, with the device serving up only a logo.
Amusingly, some users were able to restore the old web interface, which still works because the underlying data is still being provided (and probably will be, since the phone app uses the same data source).
https://github.com/iam-TJ/open-dishy/
Now when I go to dishy.starlink.com, it's running on my raspberry pi.
Voyager is a bit more V'ger from today
Literally had to go find a blurb that was like "Die Hard meets The Martian--with a dash of Knives Out" to purge that previous blurb from my mind.
"highly commercial" wtf?
editing a pdf form in firefox, what is this dark magic?
I’m thrilled to announce that our incubation handbook is now in print! A very skilled intern helped buff up the text over the winter and the result is ready to make your next hatch a major success.
(The ebook is a bit spiffier also, with the same information but more polish and a fancier cover and title.)
Here are some of the reviews of the first edition if you need more incentive to check it out:
“I have had problems with incubating chicks, getting low to no hatch, and high hatch mortality. All of the info in this book makes great sense! This helped me a lot to fix ALL of my hatch problems.” — sunnyweller
“I especially found the “helping chicks hatch” section very helpful. Followed the instructions and saved two chicks!” — Keaokun
“My first attempt at incubating was a dismal failure. I only hatched 6 of 19 eggs. Two of those had facial and beak deformities. This little ebook was so helpful and I was able to pinpoint – many – things I had done wrong.” — V. Schafer
“Awesome book, well written. Not too basic nor too much extraneous detail.” — chem girl
I’m hoping to enjoy another round of intern magic this summer, so I’d love to hear which ebook-only title you’d most like to have available in print. Or perhaps you’d prefer us to turn our newest video course into an ebook and paperback? Please comment and let me know what you want!
The post Incubation handbook now in print! first appeared on WetKnee Books.
copyright question it seems worth pondering:
If I use a false persona to get malicious code into an open source project, and along the way include some good code to cover my tracks, and I mendaciously comply with all the standard stuff needed to get my code into the project (copyright statements etc), then is that good code actually freely licensed?
mostly finished rebootstrapping from source after post-con crud
We’ve written in the past about our mushroom experiments, which mostly centered around using plug spawn in logs. So I was thrilled when our local library offered an opportunity to try something a little different — sawdust spawn.
(Yes, we do have the best library around. Yes, they did let us take home an inoculated shiitake log of our very own.)
Pros and cons of sawdust spawn
As best I can tell, the only real downside of using sawdust spawn is that you need to buy an inoculation tool. At $45 per tool, that means sawdust spawn makes the most sense for folks who intend to inoculate at least 36 logs (although you don’t have to do them all at once, of course). My math in today’s dollars:
- Sawdust spawn: about $1 per log in spawn cost
- Plug spawn: about $2.25 per log in spawn cost
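If you want to re-run the break-even math with your own numbers, it's a one-line shell calculation. The prices below are just the ones quoted above, in cents so integer arithmetic works out evenly:

```shell
# Break-even for the $45 inoculation tool: tool cost divided by the
# per-log savings of sawdust spawn ($1.00) over plug spawn ($2.25).
echo $(( 4500 / (225 - 100) ))   # prints 36 (logs to pay off the tool)
```

Swap in your own tool and spawn prices to see where your break-even lands.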
In addition to long-term price savings, other benefits of using sawdust spawn include:
- Your logs will produce mushrooms faster (in 5 to 12 months instead of 9 to 18 months).
- I actually found inoculation with the sawdust tool gentler on my wrists (no hammering!).
Other inoculation innovations
Other than the inoculation tool, using sawdust spawn is pretty much the same as using plug spawn. But I thought you might enjoy seeing our teachers’ entire process since it is definitely better than ours!
First the infrastructure: They built tables with little wooden cradles at intervals to hold the logs in place. That means the only time you really need a second set of hands is when drilling the holes.
Also note the measuring stick with the spacing information on it. No laborious hand-measuring each log!
Another innovation is the use of an angle grinder rather than a drill gun. Mark shared a video in which you can see how much faster this is than what we’d done in the past.
(Do be careful though. I could see someone drilling through their hand with this setup.)
After the holes are drilled, it’s time to insert the spawn. Sawdust spawn comes in a block like the one shown above. You break it up with your hands then scoop some of the loose sawdust out into an empty yogurt container (or something similar).
Next, bang the inoculation tool into the container a few times to fill it with spawn. Place the tool over the hole and depress the button at the top to insert spawn. The goal is for the spawn to fill the hole up to about the bark level.
After that, all you need to do is wax over each spawn-filled hole. In the past, we’ve used beeswax from local hives, but apparently any food-safe wax works. Our teachers were using paraffin, melted then daubed on with cute little brushes. But they mentioned that there’s a new kind of wax, primarily used with plug spawn, that you can wipe on cold with your finger.
After that, it’s the usual waiting game (with the side note that, since we now live in an area with less extreme precipitation than we used to be located, we need to remember to water our log if we don’t get at least an inch of rain per week).
We haven’t had productive mushroom logs since moving to Ohio, but remembering how fun and easy inoculation was put the process back on my radar. Maybe next year we’ll push wildcrafting mushrooms onto the back burner and inoculate more logs.
About our teachers
I want to end with a huge thank you to Soulshine Acres for sharing their expertise with us. They’re a frequent vendor at the Athens, Ohio, farmer’s market if you want to check some of their mushrooms out. Or just follow them on instagram using the link above to learn about their forest farm, full of over 400 mushroom logs.
The post Inoculating mushroom logs with sawdust spawn first appeared on WetKnee Books.
it was also serving the front page as a 404 for the javascript linked from the front page yesterday, which is a very nice level of breakage indeed
gotta give #starlink praise where due: by removing the proprietary web frontend from their starlink terminal, they drive free software development in the space of seeing basic obstruction maps, knowing when your starlink is obstructed or the network is otherwise down, etc
Making even 404 pages the same useless logo as the front page is also a strong choice.
Already missing #distribits, hoping we do it again sometime
appimage mounts a clipboard, wtf?
This is the result of the #distribits hackathon from @mih, Timothy Sanders, and me: a design for #gitAnnex special remotes to support storing git repositories. We improved on git-remote-datalad-annex significantly, I think, and I hope to implement this as part of #gitAnnex.
generation of the #distribits video archive has started, and since we're using a #gitAnnex repository it's a collaborative public process which will culminate in a redundantly mirrored archive with rich metadata.
Here the day long youtube videos are being cut into clips https://github.com/distribits/distribits-allvideos/pull/2
I woke up refreshed home at last, ran a git-annex get, checked out the clips branch, ran the cut command, and have every talk available to review.
Last sight of Dusseldorf. Great town!
Performed a ceremonial tagging of Datalad 1.0 at the conclusion of #Distribits
"an octopus merge of 40 thousand branches" -- #datalad people are wild #git
Streetcar I caught to the conference this morning.
The NYT today demonstrates they can't comprehend an xkcd cartoon.
Not that I didn't already understand that about their tech reporting.
Slides depicting a massive ecosystem with #gitAnnex somehow central to it is a new thing I'm collecting. Scientists produce great slides like this. (And other great things.)
When you write software to manage your cat photos and it gets used for brain-slicing scans to the tune of 2 petabytes of brain per year. #gitAnnex #distribits
looking forward to some strolls along the Rhine now that it's finally stopped torrentially raining
AMAZING day at #distribits !
Here's my talk in the day 1 recording, "git-annex is complete, right?"
https://youtu.be/BwRy3z_hQ70?t=3412
Was also in a panel session 20 minutes after that. And there are many many other great talks in there. Can't wait until tomorrow
Drilling holes in mushroom logs just got a lot easier and faster with this new shiitake drill bit.
You will also need the angle grinder drill chuck adapter.
More details from a shiitake log workshop will be ready for next week.
The post Shiitake log inoculation video first appeared on WetKnee Books.
new "Plans" section on https://tukaani.org/xz-backdoor/
"I plan to write an article how the backdoor got into the releases and what can be learned from this. I’m still studying the details.
xz.git needs to be gotten to a state where I’m happy to say I fully approve its contents. It’s possible that the recent commits in master will be rebased to purge the malicious files from the Git history so that people don’t download them in any form when they clone the repo. [...]"
Jia Tan's history of commits on #xz suggests that every png file in gcc or apache or whatever is a possible attack vector now.
https://joeyh.name/blog/entry/reflections_on_distrusting_xz/
Was the ssh backdoor the only goal that "Jia Tan" was pursuing with their multi-year operation against xz?
I doubt it, and if not, then every fix so far has been incomplete, because everything is still running code written by that entity.
If we assume that they had a multilayered plan, that their every action was calculated and malicious, then we have to think about the full threat surface of using xz. This quickly gets into nightmare scenarios of the "trusting trust" variety.
What if xz contains a hidden buffer overflow or other vulnerability, that can be exploited by the xz file it's decompressing? This would let the attacker target other packages, as needed.
Let's say they want to target gcc. Well, gcc contains a lot of documentation, which includes png images. So they spend a while getting accepted as a documentation contributor on that project, and get a specially constructed png file added to it: one with additional binary data appended that exploits the buffer overflow and instructs xz to modify the source code that comes later when decompressing gcc.tar.xz.
More likely, they wouldn't bother with an actual trusting trust attack on gcc, which would be a lot of work to get right. One problem with the ssh backdoor is that well, not all servers on the internet run ssh. (Or systemd.) So webservers seem a likely target of this kind of second stage attack. Apache's docs include png files, nginx does not, but there's always scope to add improved documentation to a project.
When would such a vulnerability have been introduced? In February, "Jia Tan" wrote a new decoder for xz. This added 1000+ lines of new C code across several commits. So much code and in just the right place to insert something like this. And why take on such a significant project just two months before inserting the ssh backdoor? "Jia Tan" was already fully accepted as maintainer, and doing lots of other work, it doesn't seem to me that they needed to start this rewrite as part of their cover.
They were working closely with xz's author Lasse Collin in this, by indications exchanging patches offlist as they developed it. So Lasse Collin's commits in this time period are also worth scrutiny, because they could have been influenced by "Jia Tan". One that caught my eye comes immediately afterwards: "prepares the code for alternative C versions and inline assembly" Multiple versions and assembly mean even more places to hide such a security hole.
I stress that I have not found such a security hole, I'm only considering what the worst case possibilities are. I think we need to fully consider them in order to decide how to fully wrap up this mess.
Whether such stealthy security holes have been introduced into xz by "Jia Tan" or not, there are definitely indications that the ssh backdoor was not the end of what they had planned.
For one thing, the "test file" based system they introduced was extensible. They could have been planning to add more test files later, that backdoored xz in further ways.
And then there's the matter of the disabling of the Landlock sandbox. This was not necessary for the ssh backdoor, because the sandbox is only used by the `xz` command, not by liblzma. So why did they potentially tip their hand by adding that rogue "." that disables the sandbox?
A sandbox would not prevent the kind of attack I discuss above, where xz is just modifying code that it decompresses. Disabling the sandbox suggests that they were going to make xz run arbitrary code, that perhaps wrote to files it shouldn't be touching, to install a backdoor in the system.
Both deb and rpm use xz compression, and with the sandbox disabled, whether they link with liblzma or run the `xz` command, a backdoored xz can write to any file on the system while dpkg or rpm is running, and no one is likely to notice, because that's the kind of thing a package manager does.
My impression is that all of this was well planned and they were in it for the long haul. They had no reason to stop with backdooring ssh, except for the risk of additional exposure. But they decided to take that risk, with the sandbox disabling. So they planned to do more, and every commit by "Jia Tan", and really every commit that they could have influenced needs to be distrusted.
This is why I've suggested to Debian that they revert to an earlier version of xz. That would be my advice to anyone distributing xz.
I do have an `xz-unscathed` fork which I've carefully constructed to avoid all "Jia Tan"-involved commits. It feels good to not need to worry about `dpkg` and `tar`. I only plan to maintain this fork minimally, e.g. security fixes.
Hopefully Lasse Collin will consider these possibilities and address them in his response to the attack.
in a cafe in germany, wide awake, 16 hours of sleep seems to have beaten jetlag and accumulated xz sleep debt
I can't wait to learn about how a lot of people are using #gitAnnex tomorrow at the Distribits conference!
arrived in Dusseldorf for #distribits
ah europe, been too long.. also this is very very europe
"Selfies please" - gate agent re facial recognition. 2024
Maybe #xz was a state sponsored attack against #aprilfoolsday
Been a while since I read a news article that quoted me as a Debian developer..
Besides that inaccuracy, I think this is a pretty decent article.
special shout out to whoever in the reversing channel is using alias "Jia Tan"
closing all my social media before I go thru TSA security because it looks like Mr Robot was here
my fun little surprise today was noticing liblzma in `ldd git-annex`
Pulled in via libmagic, which on Debian is patched to link to liblzma.
git-annex can be built without that (-f-MagicMime) but it does add a nice feature.
Anyway, interesting to know that Jia Tan's code is running in my processes forever unless xz gets reverted to the 2021 version.
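If you'd like to check your own binaries for the same surprise, here's a quick sketch (assumes a Linux system with ldd available, and git-annex on your PATH; swap in any binary you care about):

```shell
# Does this binary pull in liblzma, directly or via another library?
ldd "$(command -v git-annex)" | grep -i lzma
```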
Some theories about Jia Tan's location based on timezones in commits, like https://rheaeve.substack.com/p/xz-backdoor-times-damned-times-and/comments are relying partly on the commits in these series that were likely `git am`ed.
That makes those theories considerably shakier, although not every timestamp mentioned in that article is in these series. #xz
To find these, used:
git log --pretty=raw | perl -e 'while (<>) { if (/^commit /) { $ps=$s;$s=$_ }; if (/^author .* (\d+) [-+]\d+$/) { $pa=$a; $a=$_; $pad=$ad; $ad=$1; } if (/^committer .* (\d+) [-+]\d+$/) { $pc=$c; $c=$_; $pcd=$cd; $cd=$1; if (defined $pcd && defined $pad && $pcd==$cd && $pad==$ad) { if ($la ne $a && $lc ne $c) { print "\n" } ; $la = $a; $lc = $c; if (! defined $ls || $ls ne $ps) { print "$ps$pa$pc"; $ls=$ps}; print "$s$a$c"; } } }'
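A rough shell equivalent of the perl above, for anyone who'd rather not squint at it. It relies on GNU uniq's -f/--all-repeated options and only catches runs where both timestamps match exactly:

```shell
# Print runs of consecutive commits whose author timestamp (%at) AND
# committer timestamp (%ct) are identical. -f1 skips the hash field
# when comparing; --all-repeated=separate prints only the duplicated
# groups, blank-line separated.
git log --pretty=format:'%H %at %ct' | uniq -f1 --all-repeated=separate
```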
urk old habits die hard
Checked all xz commit timestamps for similar patterns. first is a series of commits by Jia Tan on Jan 19, then another Jan 22, then Lasse has a series on Feb 9, then a long series that includes the commits mentioned above, then 3 more series by Lasse on Feb 17 and Feb 29. This certainly seems unusual.
but, I do find similar things in git.git history, Junio has a workflow that results in that legitimately
This suggests to me that xz's git workflow changed in January.
the code changes in these commits are extensive and frightening given Jia Tan's involvement imho. Full new decoder being added with plans for assembly optimisations.
a rebase would explain the common commit timestamps, but it preserves author timestamp
this seems a little suspicious, but maybe there is some other workflow that explains it
anyone know of a common #git workflow that would result in 4 commits with 2 separate authors all having one timestamp as a common commit timestamp and a second timestamp as a common author timestamp?
and apparently modifying its breakpoint detector alerts some other part of it and it changes behavior (according to discussion in a matrix channel)
how common is it for malware to have anti-breakpoint checking in it?
curious because the #xz backdoor does: https://gist.github.com/smx-smx/a6112d54777845d389bd7126d6e9f504#software-breakpoint-check-method-1
Or the easy way: Just push some plausible-looking files to an extra branch that nobody looks at.
There's been some exploration and possibly locking down of such binary data as a way to guard against some SHA1 collision attacks, it's been a while since I dug into it.
While #xz has people talking about issues with binary test files etc in source repos, and issues with using tarballs that can vary from git, doing a `git clone` and building in there is *also* exposed to a huge amount of binary data.
Including binary data hidden inside #git commit objects, for example. Also git blobs are zlib compressed so might be possible to smuggle in extra binary data at the end. Possibly also at the end of tree objects, I don't remember if git checks for that.
Worth noting that some Jia Tan commits to #xz were made with the github web interface. You can tell because they are signed by a gpg key github uses for web edits (4AEE18F83AFDEB23).
The most recent one is 62efd48a825e8f439e84c85e165d8774ddc68fd2.
So if #Github keeps logs since January, they might have IP address information or other info.
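A sketch of how you might list those commits yourself, assuming git can invoke gpg: the %GK format placeholder prints the id of the key a commit was signed with, and 4AEE18F83AFDEB23 is the web-edit key mentioned above.

```shell
# List short hashes of commits signed with GitHub's web-flow key.
git log --format='%h %GK' | grep 4AEE18F83AFDEB23
```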
Developed a fork of #xz that eliminates all code from the malicious actor, and got my system using it, including dpkg.
https://git.joeyh.name/index.cgi/xz-unscathed/
I've suggested this as a path for #debian to completely eliminate the risk of further backdoors in the xz code.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1068024#62
Well, dpkg built without linking to an lzma library at all; instead it's using the `xz` command. Which is good enough for now; it seems to work, and I also downgraded xz to pre-Jia Tan.
Of course I installed my hacked up dpkg. Seems to work anyway.
install dpkg that I just hacked up to use a different library on my running system while running on 5 hours of sleep YN?
this is all pre Jia Tan code
Have this in a debian package now. Next I'll build dpkg against it. Whee. #xz
joey@darkstar:~/tmp>ldd /usr/bin/xz
linux-vdso.so.1 (0x00007ffc88b86000)
liblzmaunscathed.so.5 => /lib/x86_64-linux-gnu/liblzmaunscathed.so.5 (0x00007f8424805000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8424623000)
/lib64/ld-linux-x86-64.so.2 (0x00007f842486d000)
joey@darkstar:~/tmp>xz --version
xz (XZ Utils) 5.3.2alpha
liblzma 5.3.2alpha
what a day to get up at 5 am for the third day in a row
Now I have to do it tomorrow and the next day, or jet lag will murder me on Tuesday.
finding myself hacking on a fork of #xz
According to this, the #xz backdoor had additional build code that was not gated behind checks for a debian or rpm package build. Although nobody has gotten it to do anything yet, the presence of that code suggests other distributions may have also been targeted.
Today is a really good time to start gpg signing every git commit you make.
Especially if you're using infrastructure with #xz on it that could still contain unknown backdoors.
I have signed all my commits since 2016.
git config commit.gpgSign 1
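The one-liner above enables signing for the current repo; a slightly fuller sketch if you want it on everywhere (YOURKEYID is a placeholder for your own key id):

```shell
# Sign every commit in every repository by default.
git config --global commit.gpgsign true
# Pick which gpg key to sign with (placeholder id).
git config --global user.signingkey YOURKEYID
# Spot-check: show the signature on the most recent commit.
git log --show-signature -1
```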
Here's the malicious commit that disabled the Landlock sandbox. Pretty slick!
https://git.tukaani.org/?p=xz.git;a=commitdiff;h=328c52da8a2bbb81307644efdb58db2c422d9ba7
+ # A compile check is done here because some systems have
+ # linux/landlock.h, but do not have the syscalls defined
+ # in order to actually use Linux Landlock.
That may even be true.
Lasse Collin has started making some commits to #xz, interesting starting point here.
https://git.tukaani.org/?p=xz.git;a=commitdiff;h=f9cf4c05edd14dedfe63833f8ccbe41b55823b00
doing some debian development this morning, of all things
(backporting dpkg to work with a sufficiently old #xz that there's no possibility of other backdoors in it)
one thing I'm sure about "Jia Tan" is that they had extensive prior experience with open source development. Everything they write in #xz commits is pitch-perfect. This is not their first rodeo.
Kind of makes you wonder what projects they contributed to while learning all that and under what names.
Or it could have been done to add cover for the actual backdoor insertion.
But "Additionally, the file contains random bytes to help test unforeseen corner cases." in Jia Tan's original commit seems pretty suspicious
Also on closer look, no need for this to be a RISC-V version of the exploit, the file could have ended up used on any system.
Noticed that Jia Tan modified several additional test files besides those used in the known #xz backdoor.
This was at the same time as the known backdoored test files, so almost certainly these RISC-V test files also contain a version of the backdoor. Used where I wonder?
clone of xz repo now available at https://git.phial.org/d6/xz-analysis-mirror
what a day to get up at 5am for the second day in a row
9 hours sleep over 2 days and I'm trying to understand a state sponsored backdoor attack in detail
put up a tarball of the clone, so all upstream branches are preserved, http://tmp.joeyh.name/xz-git-repository-for-analysis-backdoored.tar.gz
fc739b4942130e0259c272b119108ccd9241943f73b115ef5fd16299d86054d0 xz-git-repository-for-analysis-backdoored.tar.gz
I don't have all the issues and PRs unfortunately. I was just looking at the PR that added loongson support. Seems to have come from a legitimate person, he had academic publications.
Github has disabled the https://github.com/tukaani-project/xz repository
That seems a bit of a problem for everyone who needs to understand the past activity there in order to fully address the #xz backdoor. Sheesh
I have a clone from today if anyone needs it.
@cjwatson maybe worth pushing out a ssh upgrade to deal with this?
fwiw, #debian users running testing/unstable who upgrade to fix the #xz security hole, sshd does not appear to be restarted by the upgrade so you'll probably want to do that manually
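For the record, on a systemd-based Debian system that manual restart is just (the unit is named ssh there, with sshd as an alias):

```shell
# Restart sshd so it stops using the old liblzma.
sudo systemctl restart ssh
```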
Debian is considering such a reversion here. I'm glad they're taking the possibility of further backdooring seriously.
(It's not quite as easy to revert as I'd thought it would be.)
We don't need any of the changes they made to xz. xz from 2021 was fine.
They did make commits that claimed to fix an integer overflow, apparently legitimately. So they were deep into analyzing xz security at that point.
https://github.com/tukaani-project/xz/commit/18d7facd3802b55c287581405c4d49c98708c136
I count a minimum of 750 commits or contributions to xz by Jia Tan, who backdoored it.
This includes all 700 commits made after they merged a pull request in Jan 7 2023, at which point they appear to have already had direct push access, which would have also let them push commits with forged authors.
Probably a number of other commits before that point as well.
Distributions are reverting the identified backdoor. This is insufficient given this volume of activity. Revert to before any of this.
would probably be worth the time for someone in #debian-devel to look at pristine-xz delta files archive-wide, to see if there are any unusually large ones that might hide such payloads
Of course this is still possibly in there...
Kind of glad that ssh access was a nice juicy target for the backdoored xz. Imagine if it had lurked until unpacking tar.xz sources and then ran arbitrary payloads embedded in the xz files. Could have allowed targeted ongoing exploitation of builds.
tired: tea.xyz encouraging people to post spam documentation patches to free software projects
wired: spamming projects with spam documentation patches to build up enough cred to take over and backdoor xz
I rag on github a whole lot, but this is one feature it has that I really like.
Since JiaT75 backdoored xz-utils, I have blocked him and now get to see a warning in every project he touched.
I hope wasmtime et al. are doing some careful review..
Err, this was UPS actually. I'm so Fedex burnt that I crosswired the two.
Fedex today: "Your delivery has been rescheduled for Monday. Your package is out for delivery today. Log in before April 15th or your My Fedex account will be deleted. That is not the right My Fedex password."
(Yes it is lol I can actually retain passwords unlike you.)
up at 5 am second day in a row, I guess I'm switching to European time early before my trip on Monday
PSU is an excellent choice of venue for #FOSSY. I doubt I'll make it this year, but this will be a significant improvement over last year.
I still fondly remember Debconf at PSU, having the park right there for hallway track (and the farmer's mkt!) was great.
https://social.sfconservancy.org/objects/266903a5-582b-469c-8a90-683df844c0e0
cursed TV screenshot
remembered that TV still exists so yes, I am watching QVC and EXPTV at the same time.
https://joeyh.name/blog/entry/the_vulture_in_the_coal_mine/ #vultr
Turns out that VPS provider Vultr's terms of service were quietly changed some time ago to give them a "perpetual, irrevocable" license to use content hosted there in any way, including modifying it and commercializing it "for purposes of providing the Services to you."
This is very similar to changes that Github made to their TOS in 2017. Since then, Github has been rebranded as "The world’s leading AI-powered developer platform". The language in their TOS now clearly lets them use content stored in Github for training AI. (Probably this is their second line of defense if the current attempt to legitimise copyright laundering via generative AI fails.)
Vultr is currently in damage control mode, accusing their concerned customers of spreading "conspiracy theories" (-- founder David Aninowsky) and updating the TOS to remove some of the problem language. Although it still allows them to "make derivative works", so could still allow their AI division to scrape VPS images for training data.
Vultr claims this was the legalese version of technical debt, that it only ever applied to posts in a forum (not supported by the actual TOS language) and basically that they and their lawyers are incompetent but not malicious.
Maybe they are indeed incompetent. But even if I give them the benefit of the doubt, I expect that many other VPS providers, especially ones targeting non-corporate customers, are watching this closely. If Vultr is not significantly harmed by customers jumping ship, if the latest TOS change is accepted as good enough, then other VPS providers will know that they can try this TOS trick too. If Vultr's AI division does well, others will wonder to what extent it is due to having all this juicy training data.
For small self-hosters, this seems like a good time to make sure you're using a VPS provider you can actually trust to not be eyeing your disk image and salivating at the thought of stripmining it for decades of emails. Probably also worth thinking about moving to bare metal hardware, perhaps hosted at home.
I wonder if this will finally make it worthwhile to mess around with VPS TPMs?
Oh, the updated TOS still allows them to "make derivative works", probably still allows AI training.
Look at it this way... the bit about them getting a license to any content in the Service is in the same paragraph where it talks about how you may not host illegal content on the Service. If this only applied to forum posts somehow (which it does not; "the Service" is clearly defined at the top as every part of Vultr), then they would not be prohibiting illegal content being stored in a VPS.
Also, they have changed the TOS already it seems. Here's the old one
http://web.archive.org/web/20240305043015/https://www.vultr.com/legal/tos/
Their claim that the license only applies to posts in their forum is clearly specious by my reading. IANAL
List of feeds:
- Anna and Mark: Waldeneffect: last checked (4610 posts)
- Anna and Mark: Wetknee: last checked (41 posts)
- Joey: last checked (224 posts)
- Joey devblog: last checked (270 posts)
- Joey short: last checked (909 posts)
- Jay: last checked (50 posts)
- Errol: last checked (53 posts)
- Maggie: last checked (8 posts)
- Tomoko: last checked (77 posts)
- Jerry: last checked (28 posts)
- Dani: last checked (23 posts)