Okay I got too into all this and it has become 2:30am whoops
Made pretty good progress though. I had a big list of todos and problems and questions from last time & I've got through all of them except for some PHP extensions I still need to install
Then if that all works I can try it on the actual server.....
was looking into how much free space i have on my VPS before i do all this & found that i'm using like half the 40GB available
and of that like 4GB was /var/log/journal so i configured it to use max 100MB because i never checked those logs in my life?
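(for anyone else with a bloated journal, the cap is a one-line setting — sketch, assuming the standard systemd config path:)

```ini
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=100M
```

then `systemctl restart systemd-journald` to apply it, and `journalctl --vacuum-size=100M` deletes the existing archived journals straight away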
another 4GB ibdata1 in mysql which .......apparently is mostly accounted for by my notes wiki db WHY there's barely shit in there
I Think what happened was the first script purged the references to the revisions in the "archive" table, meanwhile the second script is looking for references in the "content" table which was not actually touched by the first script
so uhhhhh I guess I can try and identify orphaned content entries myself and delete them and then run the second script and hope I don't fuck everything up
How did I even get here this has nothing to fucking do with Docker
OK so what I figured out is that revisions are linked to content by the "slots" table, and the "slots" table suspiciously has 2888 entries referencing nonexistent revision ids, which is the exact same number of archived revisions that were purged
So I THINK I should be safe to delete those slots and Then delete the content entries corresponding to those slots and Then delete the text entries corresponding to the content entries
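roughly what I mean, as untested sketch SQL against MediaWiki's stock schema — table/column names are the standard ones, and the `tt:` prefix is how content rows address text rows. take a backup first:

```sql
-- 1. slots whose revision no longer exists
--    (assumes the archive table was already purged; otherwise
--     also check archive.ar_rev_id before calling a slot orphaned)
DELETE s FROM slots s
  LEFT JOIN revision r ON r.rev_id = s.slot_revision_id
 WHERE r.rev_id IS NULL;

-- 2. content rows no slot points at any more
DELETE c FROM content c
  LEFT JOIN slots s ON s.slot_content_id = c.content_id
 WHERE s.slot_content_id IS NULL;

-- 3. text rows no content row addresses any more
--    (content_address looks like 'tt:<old_id>')
DELETE t FROM text t
  LEFT JOIN content c ON c.content_address = CONCAT('tt:', t.old_id)
 WHERE c.content_address IS NULL;
```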
Sorry for posting through this btw
well my internet went down but i'm tethering off my phone to fucking finish this
I deleted all those orphaned slots/content entries and ran the script. the script actually found the orphaned text this time but threw an error when trying to delete it. so I ran another sql query to delete the text
THAT failed cos it ran out of space while trying to write a temp file so I'm now doing them 100 at a time
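the batched version, for reference — MySQL won't allow LIMIT on a multi-table DELETE, and won't allow it directly inside an IN subquery either, hence the derived-table wrapper (same sketch hedges as above):

```sql
-- repeat until it reports 0 rows affected
DELETE FROM text
 WHERE old_id IN (
   SELECT old_id FROM (
     SELECT t.old_id
       FROM text t
       LEFT JOIN content c ON c.content_address = CONCAT('tt:', t.old_id)
      WHERE c.content_address IS NULL
      LIMIT 100
   ) AS batch
 );
```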
We did it gamers. Deleted all the spam text. And how much space has that gained me? fuck all!!!!! because InnoDB will never give you space back once it's used it, if you have the "file per table" setting turned off (which it was on my install) it just dumps everything into that fucking ibdata file which can never get any smaller. The only thing you can do from here apparently is to dump all the DBs, wipe the whole thing and reimport them. I'm going to fucking bed
Realised why that was happening: the table dump actually had TABLESPACE `innodb_system` in it, which caused the reimported table to always go in the massive ibdata file. Have converted everything to file per table properly now. But that was still kind of a futile exercise considering there's still no way to shrink ibdata without dumping, deleting and recreating all the databases. And I'm just not gonna do that until I fucking need to because I have plenty of space for docker atm
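for the record, the conversion is just the setting plus a rebuild of each table (sketch — `wiki.text` stands in for whichever table you're moving):

```sql
-- with innodb_file_per_table = ON in my.cnf ([mysqld] section),
-- a null rebuild moves the table into its own .ibd file.
-- this frees its pages *inside* ibdata1 but never shrinks that file itself
ALTER TABLE wiki.text ENGINE=InnoDB;
```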
This seems bad??!?!? I very nearly misconfigured my container in a way that would have accidentally left a port exposed, but I thought "oh well it wouldn't have been _too_ bad cos the firewall would block it anyway", but apparently, no, docker just says Fuck your firewall
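(docker inserts its own iptables rules ahead of the host firewall's, so a published port is reachable from outside regardless of ufw & friends. the defensive habit — a sketch, 9000 is just php-fpm's usual port — is to bind published ports to loopback:)

```yaml
services:
  app:
    ports:
      # loopback-only binding: reachable from the host, not the internet,
      # regardless of what docker does to iptables
      - "127.0.0.1:9000:9000"
```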
so I got the bastard working!!!!!!!!
I think my main misconception was that the docker internal host thing was like, magically mapping things to appear to connect from localhost on the host so I wouldn't have to do anything special to allow them to connect
but that's not the case, the container connects from its own IP address so I had to allow that through the firewall & allow mysql users to connect from it
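the firewall half looked roughly like this (sketch assuming ufw; 172.18.0.0/16 is an example subnet — `docker network inspect <network>` shows the real one):

```shell
# let the docker bridge network reach mysql on the host
ufw allow from 172.18.0.0/16 to any port 3306 proto tcp
```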
and this is a version of my blog now running through php-fpm in docker!! it seems to work perfectly, loads just as fast as the old version
i'm not switching over to this for the public ver yet in case i find some problems but this is very very promising
for once i can go to bed on a high note instead of an "ugh fuck this" note
restarted the container today & it came up on a different subnet so all my mysql config stopped working lollll
apparently you can configure it to use a specific subnet in docker compose, so, I did that, and now it works again
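for reference the pinning looks something like this (sketch; the subnet value is an example):

```yaml
# docker-compose.yml: pin the default network's subnet so container
# IPs stay in a predictable range across restarts
networks:
  default:
    ipam:
      config:
        - subnet: 172.18.0.0/16
```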
i've switched over the public version of that site to use it now, so if you want to poke around https://blog.12bit.club/ and see if anything looks broken then feel free
I put https://hhug.me/ on it too
2 more subsites switched over https://repo.12bit.club/ and ye olde http://fuji.12bit.club
the only one remaining (I think) that hasn't been switched is http://notes.12bit.club which didn't work with the config I used for the others & I'm not surprised tbh cos it runs mediawiki which is the most complex software running on any of them by far
alright I've switched http://notes.12bit.club over now, I solved like 3 problems it was having but it is much more complex than the others so again if anyone wants to dick around with it and see if they can find anything broken i'd appreciate it
and with that I Think all my public facing websites that use PHP are running it through Docker now..!! Still a few other things remaining like my Twitter bots.... which I'd also like to get working on non-twitter sites sometime...
(reposted bc wrong link)
@lion how are you running mysql, are you using one of the existing docker images for it?
i ask bc i know i've had a lot of confusion before when trying to set up mysql manually because sometimes it just doesn't bother with the whole networking thing
@Ninji nah i'm not running mysql through docker at all, it's just an existing setup on the host machine, i'm trying to connect to that from inside the container
@lion when you connect to it outside of docker, do you know for sure that it's accepting connections over TCP and not just using the unix socket?
@Ninji no, how do I find that out
@lion i don't have a running instance to check this against, but a good first step might be to just try `telnet localhost 3306` and see what happens
if it's refused or timed out then that's a good sign something is probably sus
check that you don't have 'skip-networking' in the server's config file, also
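(the relevant my.cnf bits to look for, as a sketch — the file location varies by distro:)

```ini
[mysqld]
# if this line exists, TCP is disabled entirely and only the unix socket works
# skip-networking

# loopback-only is a common default; a container connects from its own IP,
# so it can't reach a server bound like this
bind-address = 127.0.0.1
```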
@Ninji that did work yeah - but I also realised i need to actually let the container connect through the firewall & I'm getting somewhere now
like i'm now at the point of getting normal mysql access denied errors, i can work with this
@lion ah, excellent
check your privileges - usually mysql grants have a user *and* host assigned, so if it's something like 'taizou@localhost' or whatever then that might fail
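i.e. something like this (sketch — user/db names are just going off the thread, and the subnet is an example):

```sql
-- grants match on user *and* host; 'taizou'@'localhost' won't cover a
-- connection arriving from the container's own IP
CREATE USER 'taizou'@'172.18.0.%' IDENTIFIED BY 'REDACTED';
GRANT ALL PRIVILEGES ON blog.* TO 'taizou'@'172.18.0.%';
FLUSH PRIVILEGES;
```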
@Ninji Yeah I did indeed have to do that
and it works!!!!!!! thanks for your help
@lion this link in the sidebar is a white page: https://blog.12bit.club/?date=2014-02
@lion actually all the date links are
@RavenWorks thanks. they sure are. hmm it's giving a 500 but not logging any errors which is annoying
@RavenWorks yknow what I think those have been broken for two years lol
the white page is saved in the wayback machine going back to 2021 https://web.archive.org/web/20211201052225/http://blog.12bit.club/?date=2014-02
@RavenWorks .. and I fixed it anyway
thanks for spotting that
I still can't resolve this mysql issue though. So like for background I have extra_hosts: - "host.docker.internal:host-gateway"
configured in my docker-compose file & that's supposed to map the hostname host.docker.internal inside the container to the host machine's gateway IP
And that works, for some things, e.g. I can ping it, I can connect to its web server port. But not for mysql! It just times out! I don't know how to debug this!!!!
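(a timeout rather than "connection refused" usually means something is dropping packets — like a firewall — rather than mysql saying no. one way to tell from inside the container, as a sketch assuming netcat is installed:)

```shell
# "refused" = something answered but nothing is listening on that port
# timeout   = packets are being dropped, likely by the host firewall
nc -vz -w 5 host.docker.internal 3306
```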