Dev. Blog Post #0: The Media Gallery's Dead and Past Me (Accidentally) Killed It Slowly (Probably)

Dashmiel

Oh, the STC media gallery is turbo borked and will probably need restarting from zero images unless I learn to go CSI. TL;DR of the pictured response from a Xenforo developer: You're (as in I, the silly webmaster who didn't notice the break when it happened, back when pulling a backup could still have caught it) so fucked. Sorry, but whatever happened, the Media Gallery was simply never designed to handle whatever break in file links occurred on your server, never mind manual recovery straight from the file system. Recommend hiring a data recovery specialist to trawl the mess I fear your webserver is for a forgotten folder full of database backups.



The Media Gallery's Dead and Past Me (Accidentally) Killed It Slowly (Probably)


So that quote is not the sign of a good day's discovery. Some (dim) light and a challenge, perhaps. Which I will of course attempt. Go CSI myself, I mean. Eventually.

Nothing to lose (figuratively), and it's possible the server is already keeping compressed database backups in a folder somewhere within itself, as a default fallback backup system, because the nice person who charged us "steeply" (let us be thankful youth and inflation are both self-correcting principles) to idiot-proof this here web box foresaw this rank amateur error years ahead of the time bomb.

That would be nice to find later. Might even be useful, as far as getting the media gallery data back where it belongs. See, the data isn't gone. It just forgot the way home, and my past lax attitude towards backup best practices ensured the wayward data was left stabbing around in the dark trying to get back, instead of walking a well-lit path home.

It would be trivial to go search for them now, but I won't bother until I've worried about the other half of the equation.
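(For the record, "trivial" means something like the sweep below. The starting path and file patterns are guesses on my part, not known locations; whoever set this box up might have used anything.)

Code:
 import os
 import fnmatch

 # Sweep the file system for anything that smells like a forgotten database
 # dump. The starting path and the patterns are guesses, not known locations.
 PATTERNS = ("*.sql", "*.sql.gz", "*.sql.bz2", "*.dump")

 for root, dirs, files in os.walk("/var"):  # os.walk silently skips unreadable dirs
     for name in files:
         if any(fnmatch.fnmatch(name, pat) for pat in PATTERNS):
             print(os.path.join(root, name))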

When I can, I'll make time to attempt restoring the possibly fragmented, attachment-record-less, giant pile of unsorted, weirdly formatted data files representing every image that has ever passed through any of the domain's image rendering or caching engines.

A pile that, taken file by file through the weird and quasi-arcane hacky "Universal File Viewer" client-side app I happened to find, does still render as images. But that's one file at a time, through a click-based GUI, and batch conversion ain't firing…
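(If a point-and-click viewer can decode them one at a time, a script ought to manage the lot. A minimal sketch, assuming Pillow can chew the same formats the viewer does, which I haven't verified, and a made-up location for the pile:)

Code:
 from pathlib import Path
 from PIL import Image  # assumes Pillow is installed and recognizes these formats

 PILE = Path("/path/to/the/pile")  # hypothetical location of the recovered files

 for f in list(PILE.iterdir()):  # snapshot the listing before renaming things
     if not f.is_file():
         continue
     try:
         with Image.open(f) as img:
             fmt = (img.format or "bin").lower()  # e.g. 'jpeg', 'png', 'webp'
     except Exception:
         continue  # not something Pillow recognizes; leave it alone
     # Tag each file with the extension its magic bytes say it deserves.
     f.rename(f.with_suffix("." + fmt))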

If I'm lucky, such a folder is already waiting with a copy of a database backup that matches the right time frame to plug the missing-records hole, if that's even possible. I didn't want to bother the nice Xenforo developer with a stupid question whose answer, if it has one, will be self-evident.

If the attachment records can be restored from a database backup (assuming one can be found) and used to identify the media images within the big collective pile, it can be fixed.
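(The matching itself isn't the scary part. If a dump ever coughs up the attachment records, reuniting them with files on disk could look roughly like this. The table and column this data comes from, and the hash used, are stand-ins; I'd have to check what Xenforo actually stores:)

Code:
 import hashlib
 from pathlib import Path

 PILE = Path("/path/to/the/pile")  # hypothetical recovered-files location

 def sha256_of(path: Path) -> str:
     """Hash a file's contents so it can be matched against a record."""
     h = hashlib.sha256()
     with open(path, "rb") as f:
         for chunk in iter(lambda: f.read(1 << 20), b""):
             h.update(chunk)
     return h.hexdigest()

 # {content_hash: original_filename}, to be filled from whatever table the
 # restored dump turns out to keep attachment records in.
 records: dict[str, str] = {}

 for f in PILE.iterdir():
     if f.is_file():
         digest = sha256_of(f)
         if digest in records:
             print(f, "->", records[digest])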

If not...

This collection of ramblings / thought experiment / impromptu developer blog doesn't assume it all turns out well.

Because it's a mea culpa, and a delusional, sorta-educated-guesswork-fueled postmortem. And 'sorta' is all that can be achieved, because you can't dissect the worst kind of dead data: the kind that only ever virtually existed.

There will be a complete reset of the Media Gallery system. I will be disabling it later today, and I don't have an ETA on when a fresh version will be ready to go. Other image uploads (literally everything you do with an image that isn't uploading to a "Gallery Album") will remain unaffected everywhere else on the site.

Yes, yes, the devil can advocate on behalf of past me, figuratively speaking. I didn't know better, and thought a "safe" nuclear option might cover all the bases. The lack of imagination has been rectified, and the experience has since calcified out of...somewhere. I digress.


Today You Learn Of [My] Madness And The Madness That Is Virtualization


From our server's point of view, it does nothing at all as part of our current backup and recovery pipeline.

The way I've been relying on taking backups is not the (read: our) server periodically and automatically backing up the database and exporting that dump somewhere else, output which could later be restored into any fresh, working Xenforo skeleton installation with relative ease. (A primer on what that baby primarily does for our platform, if needed: underneath all the bells and whistles we've come to expect of databases, like holding the rows that tell the forum posts how to apply the BB code glitz, or teach the add-ons to flash, zig, and wig, it holds the actual text field data that populates our site. Our main operational resource.)

With the data folders that actually change often (like, say...the full and intact folders of media items for a media gallery, complete with their associated hidden properties and symbolic file links which the database recognizes and brands as 'my data') zipped up along for the simple ride to, say, 1, 2, 3, or even 10 separate cloud file-saving services for multiple redundancy.
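(In spirit, the sensible version would have been something like the sketch below, kicked off by cron or a systemd timer. Every path, database name, and rclone remote here is made up for illustration:)

Code:
 import gzip
 import subprocess
 import tarfile
 from datetime import date

 STAMP = date.today().isoformat()

 # 1. Dump the database (Xenforo sits on MySQL/MariaDB). 'forum_db' is a
 #    placeholder; real credentials would live in an option file, not inline.
 dump = subprocess.run(
     ["mysqldump", "--single-transaction", "forum_db"],
     check=True, capture_output=True,
 )
 with gzip.open(f"/backups/db-{STAMP}.sql.gz", "wb") as out:
     out.write(dump.stdout)

 # 2. Zip up the frequently changing data folders alongside it (paths invented).
 with tarfile.open(f"/backups/data-{STAMP}.tar.gz", "w:gz") as tar:
     tar.add("/var/www/forum/data", arcname="data")
     tar.add("/var/www/forum/internal_data", arcname="internal_data")

 # 3. Ship both off-box; repeat per cloud service for the multiple redundancy.
 subprocess.run(["rclone", "copy", "/backups", "cloud-one:forum-backups"], check=True)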

That would have been too sensible, and it would even have worked alongside the current, lazier layer of backup-taking.

Seriously, past me, whaaaat the fuck? A properly timed stray thought setting off this same set of musings, hypothetically, right when the server was freshly set up would have been nice, Admiral Hindsight. :clown:

No, instead I just rely on the Linode Backup Service (the automatic job running on the VM above our virtual machine, in the nesting Turtles-All-the-Way-Down nature of shared hosting), which takes a rolling series of 'snapshots' of the server as "the backup solution": two weekly slots, one daily slot, one manual slot, with Monday backups promoted to replace the oldest weekly slot.
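(My mental model of that rotation, as a toy and nothing more; this is not Linode's actual implementation, just the slot arithmetic as I understand it:)

Code:
 from datetime import date

 # Toy model of the slot rotation as I understand it; not Linode's real logic.
 # The manual slot only changes when a human asks for it, so it's left alone.
 slots = {"daily": None, "weekly_a": None, "weekly_b": None, "manual": None}

 def take_snapshot(day: date, image: str) -> None:
     snapshot = (day, image)
     if day.weekday() == 0:  # Monday: promote into the older of the weekly slots
         older = min(("weekly_a", "weekly_b"),
                     key=lambda s: slots[s][0] if slots[s] else date.min)
         slots[older] = snapshot
     slots["daily"] = snapshot  # the daily slot is simply overwritten every day

 take_snapshot(date(2024, 1, 1), "whole-disk freeze-frame")  # a Monday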

Snapshots, of course, being aptly named: the Real Root User (or at least the root of one level up, which amounts to the same thing for our lowly, make-believe, self-existing webserver) says freeze, and the state of the server as it is at that moment is captured in time, to be later maniacally relabeled as the objective "Now" server state, without question, whenever the actual 'now' server state is deemed unworthy.

Limited control, but powerful, and unassailably guaranteed to deploy backups...which will happily carry over any non-critical errors (i.e., if it doesn't kill the nginx webserver service or anything along its path northwards from /home, it's not a wound at the level the snapshot cares about).

Like, for example, dummy folders generated by an updater function that hangs, where for whatever reason the 'fail gracefully' condition doesn't include removing the dummy folders, or a sanity check, or an extra spicy mark in the otherwise mundane logging denoting what did get [unnecessarily] completed before the erroring-out began...
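(The shape of what I mean, as a speculative reconstruction and not anything pulled from actual add-on source; every name here is invented:)

Code:
 import os

 def do_the_actual_migration(meta: str, scratch: str) -> None:
     """Stand-in for the copy step that hung back in the day."""
     raise RuntimeError("simulated hang/failure")

 def run_update(target_dir: str) -> None:
     # Scratch folders created up front; the names are invented for illustration.
     meta = os.path.join(target_dir, "upgrade_meta_tmp")
     scratch = os.path.join(target_dir, "upgrade_scratch_tmp")
     os.makedirs(meta, exist_ok=True)
     os.makedirs(scratch, exist_ok=True)
     try:
         do_the_actual_migration(meta, scratch)
     except Exception:
         print("update failed")  # 'fails gracefully'...but note what's missing:
         return                  # no cleanup of meta/scratch, no sanity check,
                                 # no log of what partial work already happened

 run_update("/tmp/demo")  # leaves two orphan folders behind, silently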

Alright, past me, you weren't prepared back then to expect what you perceived as a fault in commercial software to be considered SOP. Tbf, ego check: you still don't, not really. Radical acceptance time: we are the User now, and only users can define "necessary". Ignorance is bliss, and the fine print does say you're not protected from yourself if you don't know what you don't know. So maybe let's tweak the logs across the board sometime to self-fluff and dramatize themselves?

That "example" error, being a non-process-killing error, could have carried on for years. A couple of legacy folder-structure renegades, rolling around the file system's document tree.

Eventually the update function runs again, and it creates and then properly destroys two dummy folders...but it does so after renaming the two weird folders it found already bearing the names it wanted to give the placeholders for some of the legacy format's metadata and temp storage during the execution of the update...
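(Again speculative; the collision-handling choice is my guess at what would produce exactly this wreckage, and the names carry over from the earlier sketch:)

Code:
 import os

 def claim_scratch_dir(path: str) -> None:
     """Guesswork: on a name collision, shove the existing folder aside
     instead of deleting or inspecting it, orphaning whatever it held."""
     if os.path.isdir(path):
         os.rename(path, path + ".old")  # the leftovers survive, now unlinked
     os.makedirs(path)

 claim_scratch_dir("/tmp/demo/upgrade_meta_tmp")     # invented names, as before
 claim_scratch_dir("/tmp/demo/upgrade_scratch_tmp")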

Meh, it's still fine at this point. The PHP callbacks are somehow still phoning home. The orphaned data (a couple of random folders that made it through the initial copy attempt before that 'Updater Failure' of way back when, including a still-intact Shrek meme video that serendipitously seared itself into this process as the sad analog of a stopped clock after a nuclear detonation, marking the time the reaction indelibly left its mark) is over there; the new data is being saved here.

The relativity between the two is expressed as ambiguity in function names somewhere in an interlinked mess of symbolic links and nested callbacks, crying out in:

Code:
 [If Expected Data Format then proceed to next function, and *function* :Dance:]

within a slithering, frolicking, surely platonic knotted tangle of '010101's in the shape of bit snakes or whatever shape it is that a file's data arranges itself in whenever there's no hypervisor around to hyper-visor it.

All is well, until our server ceases to be. Its pattern is paused and migrated to new hardware. The same files, code callbacks, all the blood and sinews of the site begin moving again after being metaphorically teleported, sci-fi molecular-reconstruction style.

Except the gallery fails, and the ghosts that held it up left no imprints I was ready to catch, because of the aforementioned lack of imagination in mitigating problems through staggered, intelligent backups using more than just one method.


Next time, we’ll do better.
 