Okay I got too into all this and it has become 2:30am whoops
Made pretty good progress though. I had a big list of todos and problems and questions from last time & I've got through all of them except for some PHP extensions I still need to install
Then if that all works I can try it on the actual server.....
was looking into how much free space i have on my VPS before i do all this & found that i'm using like half the 40GB available
and like 4GB of that was /var/log/journal, so i configured journald to keep max 100MB because i've never checked those logs in my life?
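for reference the cap is a one-liner in journald's config, plus a vacuum to reclaim what's already on disk:

```ini
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=100M
```

then `sudo systemctl restart systemd-journald` to pick it up and `sudo journalctl --vacuum-size=100M` to actually delete the old files.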
another 4GB was ibdata1 in mysql, which....... apparently is mostly accounted for by my notes wiki db. WHY, there's barely shit in there
I think what happened was: the first script purged the references to the revisions in the "archive" table, while the second script looks for references in the "content" table, which the first script never actually touched
so uhhhhh I guess I can try to identify the orphaned content entries myself, delete them, then run the second script and hope I don't fuck everything up
How did I even get here this has nothing to fucking do with Docker
OK so what I figured out is that revisions are linked to content by the "slots" table, and the "slots" table suspiciously has 2888 entries referencing nonexistent revision ids, which is the exact same number of archived revisions that were purged
So I THINK I should be safe to delete those slots, then delete the content entries corresponding to those slots, then delete the text entries corresponding to the content entries
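in SQL terms the plan looks something like this (a sketch against MediaWiki's default table names; it assumes every content_address is a tt:&lt;old_id&gt; pointer into the text table, which external-storage setups break, and obviously take a backup first):

```sql
-- stash the orphaned slot rows first so we can walk slots -> content -> text
CREATE TEMPORARY TABLE orphan_slots AS
SELECT slot_revision_id, slot_content_id
FROM slots
LEFT JOIN revision ON slot_revision_id = rev_id
LEFT JOIN archive  ON slot_revision_id = ar_rev_id
WHERE rev_id IS NULL AND ar_rev_id IS NULL;  -- in neither live nor archived revisions

-- text rows have to go BEFORE content rows, otherwise you delete the
-- addresses you need to find them. content_address looks like 'tt:<old_id>'
DELETE FROM text WHERE old_id IN (
  SELECT CAST(SUBSTRING(content_address, 4) AS UNSIGNED)
  FROM content
  WHERE content_id IN (SELECT slot_content_id FROM orphan_slots));

DELETE FROM content
WHERE content_id IN (SELECT slot_content_id FROM orphan_slots);

DELETE FROM slots
WHERE slot_revision_id IN (SELECT slot_revision_id FROM orphan_slots);
```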
Sorry for posting through this btw
well my internet went down but i'm tethering off my phone to fucking finish this
I deleted all those orphaned slots/content entries and ran the script. The script actually found the orphaned text this time, but threw an error when trying to delete it, so I ran another SQL query to delete the text myself
THAT failed cos it ran out of disk space while trying to write a temp file, so I'm now doing them 100 at a time
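the batching is just the same delete with a LIMIT, rerun until it affects 0 rows (orphan_text here is a stand-in for however you've collected the orphaned old_ids):

```sql
-- small batches keep the temp file tiny; repeat until 0 rows affected
DELETE FROM text
WHERE old_id IN (SELECT old_id FROM orphan_text)
LIMIT 100;
```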
We did it gamers. Deleted all the spam text. And how much space has that gained me? fuck all!!!!! because InnoDB never gives back space once it's used it: if you have the "file per table" setting turned off (which was the default on older MySQL versions), it just dumps everything into that fucking ibdata file, which can never get any smaller. The only thing you can do from here, apparently, is to dump all the DBs, wipe the whole thing and reimport them. I'm going to fucking bed
Realised why that was happening: the table dump actually had TABLESPACE `innodb_system` in it, which forced the reimported tables back into the massive ibdata file. I've converted everything to file-per-table properly now. But that was still kind of a futile exercise, considering there's still no way to shrink ibdata without dumping, deleting and recreating all the databases. And I'm just not gonna do that until I fucking need to, because I have plenty of space for docker atm
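for the record the conversion is just flipping the setting on and rebuilding each table, roughly (table name is an example):

```sql
-- with innodb_file_per_table=ON in my.cnf ([mysqld] section), a no-op
-- rebuild moves a table out of shared ibdata1 into its own .ibd file
ALTER TABLE text ENGINE=InnoDB;
-- tables created with an explicit TABLESPACE clause may need moving
-- explicitly instead (MySQL 5.7+):
-- ALTER TABLE text TABLESPACE=innodb_file_per_table;
```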
so...i got docker installed on the server... got my container up and running... web server pointed at it... all that worked fine
the new and cool problem i now face is that the container can't connect to the mysql server on the host, despite this working fine for me locally. it doesn't even fail in a useful way either, it just times out. if i ping the hostname that's supposed to map to the host's internal network, it responds fine. but the mysql port doesn't respond
going to fucking bed again
I still can't resolve this mysql issue though. So, for background, I have extra_hosts: - "host.docker.internal:host-gateway"
configured in my docker-compose file, & that's supposed to make the hostname host.docker.internal resolve to the host machine from inside the container
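i.e. this bit (host-gateway needs Docker 20.10 or newer; the service name here is just an example):

```yaml
# docker-compose.yml
services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```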
And that works, for some things, e.g. I can ping it, I can connect to its web server port. But not for mysql! It just times out! I don't know how to debug this!!!!
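(the one probe that's actually informative for "ping works but mysql hangs" is poking the TCP port directly from inside the container, something like this, assuming the image has nc and the service is called app:)

```sh
# ICMP and DNS are clearly fine, so test the actual port with a 3s timeout
docker compose exec app nc -zv -w 3 host.docker.internal 3306
```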
so I got the bastard working!!!!!!!!
I think my main misconception was that the docker internal host thing was, like, magically making connections appear to come from localhost on the host, so I wouldn't have to do anything special to allow them
but that's not the case: the container connects from its own IP address, so I had to allow that address through the firewall & allow the mysql users to connect from it
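concretely that's something like `ufw allow from 172.18.0.0/24 to any port 3306` on the host (subnet is an example, and mysqld also has to be listening on an address the bridge can reach, not just 127.0.0.1), plus a grant on the mysql side:

```sql
-- let the app's mysql user in from the container subnet instead of just
-- localhost (user/password/subnet are placeholders)
CREATE USER 'blog'@'172.18.0.%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON blog.* TO 'blog'@'172.18.0.%';
```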
and this is a version of my blog now running through php-fpm in docker!! it seems to work perfectly, loads just as fast as the old version
i'm not switching over to this for the public ver yet in case i find some problems but this is very very promising
for once i can go to bed on a high note instead of an "ugh fuck this" note
restarted the container today & it came up on a different subnet so all my mysql config stopped working lollll
apparently you can configure it to use a specific subnet in docker compose, so, I did that, and now it works again
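for reference that's just an ipam block in the compose file (subnet value is an example; match it to whatever your firewall rules and grants expect):

```yaml
# pin the compose default network so container IPs survive restarts and
# keep matching the host firewall + mysql grants
networks:
  default:
    ipam:
      config:
        - subnet: 172.18.0.0/24
```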
i've switched over the public version of that site to use it now, so if you want to poke around https://blog.12bit.club/ and see if anything looks broken then feel free
@lion this link in the sidebar is a white page: https://blog.12bit.club/?date=2014-02
@RavenWorks thanks. they sure are. hmm it's giving a 500 but not logging any errors which is annoying
@RavenWorks .. and I fixed it anyway
thanks for spotting that
@RavenWorks yknow what I think those have been broken for two years lol
the white page is saved in the wayback machine going back to 2021 https://web.archive.org/web/20211201052225/http://blog.12bit.club/?date=2014-02