Migrating keys

Permanent link | Tags: freenet, flip, fms, wot, hax

I've been running Bombe's vanity key generator for a few months now and finally hit a pubkey that starts with Seeker. So I'm going to try to migrate everything to the following keyspace: SSK@SeekeroGzFbr-PvztZAfhEOTQ96DJK2VE21B2q8Ff3o,ZdttbVq3XAb24k01tTE2DA3oX5MxcVl83IiiKLuHsDk,AQACAAE/

If everything goes well, my Sone and FlogHelper data should all migrate fairly easily with some manual editing of property files. I don't think I'll be able to hijack FMS' posts the way I can hijack Sone's, but I should at least be able to start posting under the new key once I remove the old one. I'll also be editing my FLIP ID to match in the near future.
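For illustration, the edit amounts to something like the following; the file layout and property names here are made up, since the actual Sone and FlogHelper configs differ:

    # hypothetical plugin.properties -- key names for illustration only
    # before:
    Identity/InsertURI=SSK@<old-private-key>,<old-crypto-key>,AQECAAE/
    # after, pointing at the new vanity keypair:
    Identity/InsertURI=SSK@<new-Seeker-private-key>,<new-crypto-key>,AQECAAE/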

KeepAlive 0.3.3.7-TS

Permanent link | Tags: freenet, plugins, keepalive

This version of KeepAlive is for those running the latest purge-db4o snapshot (1468-pre1 or later), or everyone after 1468 goes live. (1467 ended up being a minor maintenance release)

Fixes:
Close buckets to stop leaks. (Now without causing NPEs! ... hopefully :|)
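For the curious, the fix boils down to guarding the cleanup. A minimal sketch assuming fred's Bucket/FetchResult API, with processBlocks() as a hypothetical stand-in for the real work; this isn't the shipped plugin code:

    // free the bucket only if it was actually created, so a fetch
    // that fails early can't trigger an NPE during cleanup
    Bucket bucket = null;
    try {
        bucket = fetchResult.asBucket();
        processBlocks(bucket);
    } finally {
        if (bucket != null)
            bucket.free();   // release the underlying storage
    }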

No new features.

Source: keepalive-0.3.3.7-TS.zip
Jar: KeepAlive.jar
Repo: GitHub

KeepAlive 0.3.3.4-TS

Permanent link | Tags: freenet, plugins, keepalive

This version of KeepAlive is for those running the latest purge-db4o snapshot (1467-pre1 or later), or everyone after 1467 goes live. I haven't really been working on this lately, so I'll just release what I've had sitting around. This is the same as what I'd posted a while back on FMS.

Fixes:
Comply with API change.
Add some code to try to work around and chase down a strange stalling bug.

No new features.

Source: keepalive-0.3.3.4-TS.zip
Jar: KeepAlive.jar
Repo: GitHub

KeepAlive 0.3.3.2-TS

Permanent link | Tags: freenet, plugins, keepalive

This version of KeepAlive is for those running the latest purge-db4o snapshot, or everyone after it goes live. (See toad's flog.)

Fixes:
Fix the 'no metadata' problem that was keeping files from having their metadata parsed for no apparent reason.

(Not-so-)new feature:
I forgot to mention earlier that I'd added a check for existing keys when adding files, so no more dupes.
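The check itself is nothing fancy; conceptually it's just this (a sketch with hypothetical names, using java.util's Set/HashSet):

    // skip keys that are already on the keep list
    Set<String> known = new HashSet<String>(existingKeys);
    for (String candidate : newKeys) {
        if (known.add(candidate))   // add() returns false for a duplicate
            startTracking(candidate);
    }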

Source: keepalive-0.3.3.2-TS.zip
Jar: KeepAlive.jar
Repo: GitHub

KeepAlive 0.3.3.1-TS

Permanent link | Tags: freenet, plugins, keepalive, programming

This version of KeepAlive is for those running the latest purge-db4o snapshot, or everyone after it goes live.

Fixes:
Tons and tons of changes to make it work with the API changes in fred.
If the top block was fetched but the manifest fetch failed, the file was considered to be only one block, and the manifest was never fetched and parsed again. Since single-block files aren't supported anyway, recognize when only one block is associated with a file and reset its status to treat it like a new download.
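In other words (a sketch of the idea, with hypothetical names):

    // a tracked file can never legitimately be a single block, so a
    // count of 1 means the manifest fetch failed and left stale state
    if (file.getBlockCount() == 1)
        file.resetStatus();   // treat it like a freshly added download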

New Feature:
Multi-line key entry box. You can now copy/paste large blocks of keys into KeepAlive.
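Behind the box it's just a split on line breaks; roughly (a sketch, names hypothetical):

    // parse the pasted block, one key per line
    for (String line : pastedText.split("\\r?\\n")) {
        String key = line.trim();
        if (key.length() > 0)
            addKey(key);   // hypothetical per-key entry point
    }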

Changes:
Upped the default test blocks from 12 to 18.
Upped the default factor from 5 to 6.
Changed fetch algorithm.

Old: round-robin between requests. Start with the first segment and fetch every segment until finished, or until a full segment fetch showed sufficient availability. This created an issue where later segments of large files were tried a lot less often than early ones.

New: round-robin between requests. Start at the last completed segment + 1 (or the beginning) and then move on to the next file unless the segment needed healing or failed. (If one segment needs healing, the next probably does too.)
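As a sketch (hypothetical names, not the actual plugin code), the scheduling change looks like this:

    // old: every pass walked segments 0..n from the start, so early
    // segments of big files got checked far more often than late ones.
    // new: resume one past the last completed segment, then yield:
    int seg = file.getLastCompletedSegment() + 1;   // 0 for a fresh file
    SegmentResult result = checkSegment(file, seg);
    if (result.failed() || result.neededHealing())
        stayOnFile(file, seg + 1);   // an unhealthy segment's neighbours probably are too
    else
        moveToNextFile();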

Source: keepalive-0.3.3.1-TS.zip
Jar: KeepAlive.jar
Repo: GitHub

Spent more time hacking on KeepAlive. Got rid of the early-segment bias my previous builds had on rather large files, caused by short-circuiting the rest of the file once a segment was fully fetched and found to be OK. A file close to the threshold value would tend to fetch a whole early segment and succeed, preventing any of the later segments from ever being fetched.

The code now fetches a single segment from the file, and if it tests OK, goes to the next file. Otherwise, it fetches the whole segment, and if that's OK, goes to the next file. Otherwise it re-inserts the failed blocks and goes on to the next segment in the same file (more segments are likely to need healing if one does). Instead of restarting at 0 on the next pass, it starts at the next segment in the file.
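As pseudocode-flavoured Java (a sketch of the flow just described, names hypothetical):

    // one visit to a file on the queue
    int seg = file.getNextSegment();           // resumes where the last pass stopped
    if (testBlocksLookHealthy(file, seg)) {    // cheap sample first
        moveToNextFile();
    } else if (fetchWholeSegment(file, seg)) { // full fetch succeeded, nothing to heal
        moveToNextFile();
    } else {
        reinsertFailedBlocks(file, seg);       // heal this segment...
        file.setNextSegment(seg + 1);          // ...and check its neighbour next
    }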

Stats are only written if the segment that was last completed was the final segment in the file. This was a major annoyance with my earlier code, and required me to make dummy functions that filled out the stats when I skipped the end segments, else it would claim that the file was only a few percent available. With the new code, I can get rid of those fixer functions, so more logic -> smaller filesize!

No build for now because it won't work with the current testing jar (more fred API changes), but this is probably all I'm going to do on the plugin for the moment, assuming it holds up during testing. One possible 'fix' I'm still considering is re-inserting the top block each time I come around to a file on the queue, instead of only when starting back over from the beginning. An alternative would be to store the top block locally and re-insert it each pass, rather than trying to fetch it and then re-inserting.

Reminder: the next version will also have a multi-line input box. I'll be releasing the new source/jar when purge-db4o goes live.

KeepAlive WIP

In case you've been living under a rock all summer, toad has been hard at work ripping the tentacles of db4o from Freenet's various code paths. On the one hand, this is great. No more database corruption ever, woo! On the other hand, it introduces some pretty drastic changes to some of the APIs and alters a bazillion function calls. (no more 'ObjectContainer container' args)

Inserts and requests were changed quite drastically, and I had to spend a good chunk of time resolving conflicts with my own hacked requeststarter (my hack adds a third priority scheduler, called with a short instead of a bool; huge mess). Some of the notable features are:

* Temp files for downloads that were inserted uncompressed are stored in the target directory, with all of the metadata about the download at the end. Once complete, the file is merely truncated to the proper length and renamed (see the sketch after this list).
* In addition to saving a TON of disk I/O, it also means you can often preview large files as soon as at least one segment completes.
* Compressed downloads still reside in persistent-temp until complete since they will require an unpacking phase anyway.
* Warning: Downloads will take up all of the space required for the final blockcount (plus metadata) immediately upon fetching the manifest, so don't start downloading a file if you don't have the free space *right now*
* The data structures being serialized to disk are very robust against corruption, and can recover from having some bad data without losing the good along with it. Much better for running on flaky consumer hardware.
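
The completion step from the first point above is ordinary file surgery; in plain Java it would look something like this (my sketch of the idea, not fred's actual code):

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.file.Files;

    // the temp file is the final data plus a metadata tail, so
    // completing the download is a truncate followed by a rename
    static void complete(File tempFile, File targetFile, long dataLength) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(tempFile, "rw")) {
            raf.setLength(dataLength);                      // drop the metadata tail
        }
        Files.move(tempFile.toPath(), targetFile.toPath()); // rename into place
    }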

Another thing that was drastically overhauled was FEC. It's all done in memory now, via a synchronous call; no more FECQueue or pending DBJobs. KeepAlive does a lot of low-level operations on keys, including manually doing FEC calls on individual segments as needed, so this change was a bit daunting at first. Spending a day looking at the API and at examples of how it was being used in fred really helped though, and the resulting replacement code is much shorter and cleaner: there are no callbacks or loops waiting for a variable to indicate completion, just a few arrays that go into FEC and come out the other side with data filled in where needed.
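The new shape is roughly the following; I'm writing the names from memory, so treat all of them as approximations rather than the exact fred API:

    // gather the segment's blocks, with nulls where a block is missing
    byte[][] dataBlocks = gatherDataBlocks(segment);
    byte[][] checkBlocks = gatherCheckBlocks(segment);
    boolean[] dataPresent = presenceFlags(dataBlocks);
    boolean[] checkPresent = presenceFlags(checkBlocks);

    // one synchronous call; missing data blocks come back filled in
    codec.decode(dataBlocks, checkBlocks, dataPresent, checkPresent, blockLength);

    // no FECQueue, no DBJob, no flag to poll: when decode() returns,
    // the arrays are ready to use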

Getting Sone working with the changes didn't take nearly as long: it was mostly just implementing a RequestContext, removing references to db4o-related things, and using RandomAccessBucket instead of plain Bucket for certain data being sent to/from the node.

FlogHelper was another case that required RTFSing in fred to get my head around the API changes. Where one used to do nodeClientCore.queue(new DBJob() { /* insert run func here */ }, priority, bool), one now does nodeClientCore.clientLayerPersister.queue(new PersistentJob() { /* insert run func here */ }, priority) instead. If you can read this, I was successful at getting FlogHelper to work again.
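
Spelled out side by side (the queue calls are as above; the run() signatures are from memory and may be slightly off):

    // before, db4o era:
    nodeClientCore.queue(new DBJob() {
        public boolean run(ObjectContainer container, ClientContext context) {
            doTheWork();   // hypothetical stand-in for the job body
            return false;
        }
    }, priority, false);

    // after, purge-db4o:
    nodeClientCore.clientLayerPersister.queue(new PersistentJob() {
        public boolean run(ClientContext context) {
            doTheWork();
            return false;
        }
    }, priority);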