Streaming Consciousness
Too Many Shortcuts

Back when computers were physically large, virtually small, and GUIs were primitive, a major selling point of one piece of software over another was how many mouse clicks were required to get things done. The most lauded software had keyboard shortcuts for most of its functionality so that tasks took zero mouse clicks.

Keyboard shortcuts were thus burned into the minds of developers as good things. Unfortunately, there can be too much of a good thing. These days my major problem is not that there are too few keyboard shortcuts, but that there are too many. In basically every major piece of software I use on a daily basis, including my terminal editor, pretty much every key on the keyboard is a shortcut to something. Every program wins at the mouse click game these days. But they all lose at the typo game.

Several times a week I'll typo a shortcut. Perhaps it is because I forgot which terminal emulator or web browser I'm in. Perhaps I just wasn't paying attention to where my fingers were on the keyboard. Perhaps something else brushed the keyboard. Whatever the reason, I hit the wrong keys. Hopefully nothing too terrible happens, like the window or program closing out from under me. This is all very annoying.

So here it is. I like keyboard shortcuts. I really do; there are a few I use hundreds of times every day. Then there are the few I use once or twice a week. Then there are the rest, which I only use when I typo them or to undo a typo'd shortcut. While I like the ability to have the shortcuts I do use, I wish very few shortcuts were enabled by default. I don't use most of them and they only get in my way.

What I Want to Do

Finally we get past the groundwork and on to what I'd like to do in the near future. Obviously my reasons for wanting to do this stem from the grim future ahead.

Without beating around the bush, I want to build a self-supporting, but not self-sufficient, estate. The scale of this is still undecided; it could be a medium-sized homestead on a dozen acres or it could be the core of what will eventually be the estate house of a thousand-hectare fiefdom. How large it is depends on many factors, but mostly cost and how much money I end up having to put toward the venture.

Now, in thinking for the future, I want to build it differently than most would. This means building for durability and minimal maintenance. This means building with whatever modern tool is the most appropriate, but designing with an eye towards maintaining and working using only hand-powered tools. If I'm going to be powering the whole thing using nothing but my back, then energy efficiency is critical. This means getting a bit outside the norms.

The first major departure is that I want to build the house primarily out of stone. Stone walls outside and at least stone walls on the interior perimeter, probably with some significant insulation between them. In theory a modern stone house is cool in the summer, easy to keep warm in the winter and will pretty much last forever as long as you pick the moss off on a regular basis. Just what I'm looking for.

Now energy efficiency and self-supporting means burning biomass for heat. In my area of the world this means wood. Apparently the most efficient wood-burning stoves are the Russian masonry stove designs, so the central pillar will be such a stove. I've heard that you can coppice willow trees for easy use in these stoves so I'll probably give that a shot. I'd also want to try bamboo if I could get it to grow, because growth rate is really what matters. Normally bamboo burns too quickly and too hot for effective use in a wood stove, but a masonry stove is meant to have a hot, quick fire, so it might be a good fit which doesn't require a lot of labour to harvest.

On the topic of heat, one needs hot water, and I envision an on-demand (tankless) propane water heater fed with solar-preheated water. In the summer the propane heater would probably not be needed at all. So even after propane becomes too expensive there will at least be seasonal hot running water for cleaning dishes. I can't recall where I saw it, but there is apparently a common setup somewhere in Europe where cupboards have built-in drying racks in which dishes are stored after washing. This seems pretty brilliant so I'd want to replicate that, though with the addition that this is built into one side of the masonry heater to ensure that the dishes are always warm during the winter. I've been experimenting and warm plates make a surprisingly large difference to eating during the cooler months.

At least in North America you don't tend to see mud rooms or large entrance ways in houses anymore. This house will have one to help keep the cold out during the winter and to have a good place to put coats and boots. I find too many homes these days economize on that and it just makes a mess.

I expect the house will have a proper cellar which will be used to store food and the like. Similarly I believe I want to put in a cold pantry, that is, a pantry which has cool air from the cellar automatically blown into it in order to maintain a temperature slightly higher than your standard refrigerator. There will still be a standard fridge, but I am thinking ahead. Similarly there will be a (probably propane) oven and range, but also a wood oven and cooktop built in as part of the masonry stove, though with a separate firebox. Modern conveniences with reliable backup.

I think there will likely be four levels total: cellar, ground floor, second floor and attic. The top two floors I would design primarily as bedrooms. I have a moderately large family so if the worst should come I want to have enough space to house them indefinitely in only moderate discomfort.

Perhaps the biggest questions about the house proper that I still have to ponder are what to do about bathrooms and whether a single masonry stove will be enough. The primary reason this is even an issue is that I want the masonry stove to be more than just the centre of the living area. As mentioned above I also want it to be a cooking tool and to be used to dry and warm dishes. This would traditionally mean using it as one wall of the kitchen. That's fine enough, but the difficulty arises when it is time to consider heating the bathroom(s).

I've been experimenting with living with less heat than is normally considered livable these days. For the most part it isn't that big a deal and there are simple accommodations one can make. However, bathrooms present a problem. The normal accommodations usually involve putting on some heavier clothes and applying electric heating to your person. Unfortunately neither of these is suitable for a bathroom. In any case the towels never really dry. So far the only fix I've been able to figure out is to keep the bathroom heated to late-spring temperatures. The towels could be handled by a heated towel rack, but that'd still leave the bather cold and wet. Thus with this house my current plan is to have one wall of the bathroom(s) be, possibly partially, the masonry stove itself. Thus the bathroom would always be warm and towels would dry quickly.

Wanting to use two walls of the heater for something other than heating the main living space presents a problem: you end up with very little masonry heater for the rest of the house. Thus the question of whether to put the bathroom(s) on the second floor, where the heated spaces are smaller, or to have two masonry heaters, which gives more wall to use. The only problem with two masonry heaters is that it makes the house significantly bigger and takes more effort and fuel to heat.

More or less this house would be designed to operate in two modes and the transition between them. Specifically, it should be considered a modern, if idiosyncratic, house full of the modern amenities one would expect today. It should also operate just fine as a post-petroleum building, with all the amenities appropriate technology can afford. The plan would be that I slowly build this house and the surrounding land over the next decade or so. At times I would live in it seasonally, working remotely. Eventually the plan would be to live there full time, doing whatever work I can either locally or by telecommuting. When others, who I haven't explained the grim future to, ask me about it I tell them I have decided to start building my retirement home early.

So as you can see this is by no means fully thought out, though it is certainly all possible. And yet it is what I want to do as my long-term preparation for the grim future.

What I am Doing Now

So why am I telling you all this? Why have I put more than thirteen thousand words onto a page? The reason is that for a couple of years now I have been slowly making changes in my life to prepare, in some small way, for the decline, and I'm tired of not telling people the real reason I've been doing it. So the plan is that today I'll discuss the things I've been doing and then in my next post I'll start discussing my plans for the future as they stand today.

Basically the things I'm going to talk about can be put into three categories: investing in very durable goods, learning to use less energy and preparing for the end of the always-on Internet. All of this is rational under the simple assumption that energy will, on the whole, get more expensive to buy, relative to my income, as time goes on. Ignoring the recent drop in oil prices for the moment, as it hasn't shown itself to be a long-term trend and honestly doesn't seem to have reduced fuel prices in my locale at all, this is a reasonable assumption given the history of the last five years or so.

Buying very durable goods is perhaps the easiest to explain. More or less, whenever I find I need some durable good, a microwave or table or vehicle, I do additional research and pondering and saving. The goal of this is to buy a durable good which will not only last many years, but also last me many years. That is, the table needs to be durable enough to last a couple of decades or more, but also of the fashion and size that I won't tire of it or have it no longer meet my needs in a handful of years. So I buy to last and I buy to keep. This unavoidably results in me having a mix of cheaply made items, where I haven't found what I'm looking for yet, and rather high end items, which appear to be pretty timeless.

Now, this buying for function over form does raise some eyebrows sometimes. For example, when it came time to buy some tables I saved and went to a custom manufacturer. So I have some really nice tables which are exactly what I want and should last just about forever. In contrast, however, I drive a less than pristine ten-year-old pickup truck. It is by no means nice, but it gets the job done and when the time comes to discard it I won't be terribly displeased. Other examples include paying about three times what the cheapest model would have cost for a printer, but also habitually avoiding eating at restaurants. It can look odd to buy really good stuff and pretty crappy stuff but not much in between.

Another way it raises eyebrows is by being different. Because I put so much research into the current and long-term suitability of what I buy I often end up buying things that are just outside the mainstream. A good example of this comes from being comfortable during the summer heat. Where I live it doesn't tend to get too terribly hot during the summer, low thirties Celsius at the most, but when it does it tends to be pretty humid. Faced with this situation, and working at home at the time sweating my keyboard off, most people would buy a small AC unit. Not me; instead I chose to buy a dehumidifier. The logic being that I've been perfectly comfortable in pants and a shirt in heat near forty degrees simply because I was in a low-humidity desert. And the dehumidifier worked. I was comfortable, though I had to drink a lot of water. So by thinking deeply about my purchases I often end up with slightly strange answers to common problems because I think I've found something better. Often I'm not wrong, but have made the trade-offs differently in a way which works for me.

Learning to use less energy is simple in theory, but can be hard to understand. Most people think that using less energy is just something you do. Buy better lightbulbs, turn down the heat, that sort of thing. And to a point they are correct. But past a point you don't have any low-hanging fruit left, so you have to make modifications to your life in order to use less energy. Now of course few people want to go live in an unheated hovel just to say they use less energy, and neither do I. This is where the learning comes in. You have to experiment with using less energy while keeping comfortable. This might mean learning how to most effectively use public transit (use it only for commuting to work) or how to arrange your lighting to need the fewest lightbulbs to light the area you care about. A big one is learning ways to preserve and optimize your use of heat.

Preserving heat is just learning the best ways to keep heat within the area you live in. This is not just adding insulation to the house, but learning how to use curtains and how to seamlessly change where you do certain things seasonally. It sounds pretty simple in theory, but in practice there are many variations you really need to try to find the best balance between effective and easy.

Optimizing your use of heat specifically, and energy in general, is the more challenging of the two parts. We are all told about preserving energy, but hear very little in the developed world about optimizing our use. Since heat at home is such a good example I'll stick with it. Preserving heat is just keeping heat within your home. That means it's fine to go ahead and keep the thermostat set high all day every day as long as you have enough insulation. But you can optimize to get by with a lot less heat than that. Obviously dressing up in sweaters and thick socks is one way to remain comfortable with the temperature turned down, but it doesn't cover everything. For example, if you turn the heat way down, how do you keep cold plates from sucking all the heat out of your food? What do you do when you are sleeping?

Using less energy is a learning experience as you try new things. This is one thing which will probably get easier as energy gets more expensive, but today it's pretty tough and gets you some strange looks.

Finally we have dealing with the changes to computing which I expect. This means more expensive computers and a return to the part-time Internet. This is where I'm making the most progress, mostly because many of the technologies which built the part-time Internet still exist in modified forms, such as email. More specifically, I spend some of my hobby coding time building distributed solutions to common problems in computing: things like bug trackers, continuous integration systems or terminal multiplexing over slow and expensive links. Though I claim otherwise on the clear web, a major reason I am building these things is to make the transition easier to a world where no one can afford to be connected by a multi-megabit link all the time to the entirety of the Internet. Instead I'm building tools to be used when you only have a couple hours a day at 128kbps and need to get as much done during that time as possible. This takes time, but I'm a developer by trade so it's going reasonably well. At least it's going much faster than the decline of the Internet so far.
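
To give a flavour of the kind of tool I mean, here is a minimal sketch of an offline-first fetch queue: you note URLs while disconnected, then fetch them all in one short connection window. The file names and structure are purely illustrative, not any particular project of mine.

    # fetch_queue.py -- minimal sketch of an offline-first download queue.
    # Everything here (file names, layout) is illustrative, not a real project.
    import pathlib
    import urllib.request

    QUEUE = pathlib.Path("fetch_queue.txt")   # URLs noted while offline
    INBOX = pathlib.Path("inbox")             # fetched pages for offline reading

    def enqueue(url: str) -> None:
        """Record a URL to fetch during the next connection window."""
        with QUEUE.open("a") as f:
            f.write(url + "\n")

    def drain() -> None:
        """Fetch every queued URL in one burst; keep failures for next time."""
        if not QUEUE.exists():
            return
        INBOX.mkdir(exist_ok=True)
        urls = [u.strip() for u in QUEUE.read_text().splitlines() if u.strip()]
        failed = []
        for i, url in enumerate(urls):
            try:
                data = urllib.request.urlopen(url, timeout=30).read()
                (INBOX / f"{i:04d}.html").write_bytes(data)
            except OSError as err:
                print(f"failed, will retry next window: {url} ({err})")
                failed.append(url)
        QUEUE.write_text("".join(u + "\n" for u in failed))

You would call enqueue() whenever you come across something worth reading and drain() once during your daily connection window; everything else happens offline.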

And that's the summary of what I'm doing right now to prepare for the grim future I foresee. It's not drastic lifestyle changes. I've not given up the modern world for hemp shirts and muddy fields. I still watch cat videos on the Internet. I'm just slowly hedging my bets.

What Does This Mean

All the posts in this series so far have been pretty abstract. They've all dealt with global, civilizational or society-level effects in general terms. None of that is terribly useful for understanding, on an individual level, what all this means. What does a world with less energy look like in day-to-day life? Now the future hasn't happened yet so nobody can be perfectly sure about the details, but there are a few generalities which are highly likely to happen.

An important technical note: all of my numbers below are broad generalities and all of them are in real terms. That is, the order of magnitude of the numbers is probably correct, but not the precise number itself, and when I describe a cost you can imagine it as if you had to pay that cost out of your current pay cheque. There are several things which could happen to the value of money, but they have been ignored below by dealing with approximate hours of work. That is, how many hours of work it takes to afford something. Consequently you should interpret any numbers as if the price of that item increased that much today. As an example, if I were to claim that driving would become ten times as expensive and you currently spend $300 per month insuring, fuelling and maintaining your car, you should consider how you would afford $3000 per month in order to use your car in the same way as you do now.

The most obvious effect is the cost of energy slowly and unsteadily increasing to ten times the current level. This is a pretty obvious outcome of more people bidding for less energy. What's less obvious is what effect this will have on the life of the average person.

The most immediate effect of increasing energy prices will be the extra cost of energy intense activities, such as heating or driving a car. This is bad news for those who live in large or poorly built houses and for anybody who is not within bicycle distance to work. We could expect the cost of these to slowly increase to ten times the current level over a few decades. At that point nobody but the rich will still be driving anywhere. Similarly the cost of a taxi or public transportation will increase equivalently.

Governments skim a bit of energy off the top of every private transaction in order to pay for the various public services. In a world where everything costs at least ten times as much due to energy costs, and all the individuals within the economy have significantly less surplus to pay towards these taxes, government services will be barely functional. It is quite likely that the most expensive optional government services will be reduced to bare minimum levels. This includes things like healthcare, education and benefit programmes. There will likely still be modern, high-tech hospitals running, but they will be privately funded and, to the average person, unaffordable. Critical services like police, firefighters and public transit will be overwhelmed and underfunded.

With the cost of the energy put into manufacturing new items rising, you would expect the price of new goods to increase, and you would be correct. You might also expect the resale price of old goods to hold up comparably well. Unfortunately things are not quite as simple. Where you have used items which are more or less identical to the costly new items, or which are no longer producible, say because the necessary economies of scale have collapsed, the price will increase mostly in line with energy costs. However, everything else will drastically decrease in value. At least a factor of ten decrease should be expected for a wide range of assets. Housing prices will drop drastically as existing houses are determined to be impossibly expensive to keep up. There will be such a glut of used household items, tools and furniture that they will hold nearly no value. Financial assets will quickly become worthless as it is determined that the prosperity they were based upon no longer exists. Expect any investments which aren't in directly productive uses to become worthless in short order.

In short, the cost of things you have, savings accounts and cars and houses and your wedding china, will go down by a factor of ten while the cost of the things you want, food and heating fuel and bicycle parts, will go up by a factor of ten.

In a world like this people are going to have to make some hard choices about where to allocate their limited resources. Businesses selling products and services people buy today, but will cease buying because their value is too low, will be hit hard and shrink rapidly. It will be a bad time to run a pet spa. More pertinent will be the effect on the Internet. The Internet, as it is currently structured, is quite expensive to run. A person requires a computer which costs on the order of $1000, connected to some networking equipment which costs on the order of $100, and they need to pay on the order of $100 a month to gain full-time, high-speed access to all parts of the Internet. This is in addition to whatever electricity they have to pay for to keep those machines running. On the other end you have very expensive and power-hungry computers and networking equipment run full time to service the requests. Much of this is supported by advertisements or service fees.

Applying the ten times estimate we've used so far, we can see that the Internet as we know it now will become unaffordably expensive for the average person. At least, if used as people do today. I believe what we will see instead is a greater emphasis on sporadic connectivity. That is, when not actively in use the computer will be off. In order to best split up the cost of the now quite expensive (~$10,000) computer, several people will share it. Additionally, time connected to the Internet will be minimized to allow better sharing of the network resources and consequently cheaper connectivity fees. More or less I expect a world similar to the dialup world of the early 1990s. People will still own computers, but instead of many per household you'll see one. It will be used much less once there are no more videos to watch or music to download. Personal computers will get simpler and more robust, but at the same time somewhat less capable than the computers used for business. Connecting to these more powerful and capable computers will happen more sporadically and in bursts as short as is feasible. This means doing more work locally and then uploading it in bulk to your ISP. Similarly people won't be able to afford to casually surf the net and will instead bulk download much of what they want to read two or three times a day. The Internet will become a slower place, but will still hold a useful place within society.

With so many businesses which sell luxuries, such as pet spas or ice cream, losing significant parts of their sales or folding entirely, we can expect the economy as a whole to be doing poorly. Many will be out of work or only work irregularly. Consequently anything but the most skilled labour will be poorly paid and have no job security.

It is worth going into a bit more depth about what the loss of economies of scale will do to certain products. Certain types of products, mostly highly technical or based on ingredients which are not normally available in the same season, tend to depend on economies of scale for their production somewhere down the supply chain. Processed foods and anything with an integrated circuit in it (nearly anything which uses electricity these days) will likely see their costs increase disproportionately as the economies of scale which make fast long-range shipping and massive manufacturing facilities economical disappear. As these economies of scale disappear we can expect a further factor of ten increase beyond the one which will come from energy prices. As these luxury products (do you really need a smartphone to survive?) are priced out of reach of those who don't make money using them, the economies of scale will reduce even further, and so it would not be surprising if, within a few decades, the costs of these types of goods went up by a total factor of 1000, if they are available from normal production runs at all.
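
A toy compounding calculation makes it clear how quickly such multipliers stack; each factor below is only a rough illustrative guess, not a forecast:

    # Illustrative only: each factor is a rough guess, not a measurement.
    energy_factor = 10    # general energy-driven price increase
    scale_factor = 10     # loss of shipping and manufacturing economies of scale
    demand_factor = 10    # further scale loss as remaining buyers are priced out

    total = energy_factor * scale_factor * demand_factor
    print(total)          # 1000: a $50 gadget priced like a $50,000 one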

Living in the world I've described will be challenging, but it's not all dark. There will still be friends and family and sunny summer days. Consumerism will not survive, but humanity certainly will.

Why There is no Salvation in Technology

Since I've started this series of posts there has been a bunch of good discussion about it in the comments. One point that keeps coming up is the claim that my entire thought process ignores this technology over here, which is just about ready to be put into mass production, or that technology over there, which looks promising and might be ready in ten or fifteen years. Or how previous energy transitions went fine, so we have nothing to worry about during this one. I've tried to explain my position in the comments, but the argument is too long to lay out there and the discussions tend to get sidetracked into the particular technologies brought up to dispute my point. I'll take a slight detour from my original direction of posts in order to explain why this energy transition is different and why technology won't be able to prevent us from hitting a few decades of grim future.

I want to begin by making it clear that I am by no means anti-technology or ignorant of prospective energy-related technologies, either ready now or in development with an eye to being ready in a decade or two. To help dispel thoughts that I'm a Luddite in the traditional meaning of the word I will briefly summarize my position on the common contending technologies. First the ugly. It's been a few years since the "hydrogen economy" has been popular, but it seems a good place to start. Hydrogen fuel cells are not an energy source; instead they are a fancy battery. Now batteries are useful, no doubt, but what matters is where the energy comes from to fill those batteries. Except for niche applications hydrogen fuel cells are a failed technology because hydrogen is too expensive to extract from materials you can't otherwise just burn, and hydrogen has this nasty habit of leaking through any container you can make.

The final stake for hydrogen, though, is synthetic fuels. That is, taking energy and some form of carbon and turning it into synthetic diesel, gasoline or ethanol. This is also more of a battery-style technology than an energy source, but it avoids the need for expensive catalysts and high-tech fuel cells and instead uses the well-understood and mass-produced internal combustion engines we already have. The energy cost of doing this is on par with hydrogen, but the synfuel doesn't leak through any container you can make.

Fusion is a popular energy source to roll out when talking about the future. At this point given the length of time and amounts of money which have been put into fusion I don't believe fusion on a human scale to be economical as a power source. Certainly fusion works on the scale of the sun, but we can't put a sun on Earth.

Uranium fission is a well-tested technology. The old-style reactors, such as those producing power today all over the world, are well understood and barely economical if you account for cleanup costs. Unless we are willing to create a sacrifice zone when a nuclear power plant is decommissioned every fifty years or so, nuclear power is just too expensive. Additionally, building a nuclear reactor takes a long time, at least five years, but often more than ten. The importance of build-out time will be discussed later. Finally, we don't seem to have enough uranium. All the estimates I've seen indicate that we have about one hundred years of supply left, at current usage rates. Obviously if we increase our use of uranium fission by ten times we'd only have ten years, not one hundred.

Now it is true that current uranium fission reactors aren't terribly efficient when it comes to their use of uranium. The waste from the reactor tends to still have 95% or more of the original uranium still present. There are types of reactors which produce less wasted uranium and have less dangerous waste as a result. These breeder reactors are generally not used for fear of nuclear proliferation. Breeder reactors are a promising avenue, though they suffer from many of the same criticisms as traditional uranium fission. That is, they are slow to build, expensive to decommission and have some safety concerns. Using breeder reactors alone doesn't solve the uranium shortage though. Looking at page 41 of this summary of world energy use in 2013 we can do a little bit of arithmetic to determine a few things. The first thing we can determine is how much of the world's energy is currently provided by fossil fuels, 86.6%, and nuclear power, 4.4%. If we were to replace all fossil fuels with traditional uranium fission we'd have to increase nuclear power by a factor of nearly 20. At those rates the currently known uranium would only last five years. If we instead used breeder reactors, which burn nearly all of the uranium instead of the roughly 5% that traditional reactors manage, the math results in a uranium supply lasting, at today's usage rates, about one hundred years. Which sounds pretty great as long as you are willing to live in a world with 0% annual increases in global energy use. Looking at the same table we can compute that energy use grew about 2% between 2012 and 2013, and I wouldn't precisely call those boom years for the global economy. Zero energy use growth would mean an even worse economy, so you can expect an exponential depletion rate reducing that one-hundred-year supply down to fifty or so.
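
For anyone who wants to check the arithmetic, here is a back-of-the-envelope version of it, treating the percentages above as rough inputs; the burnup figures are my approximations and the results are orders of magnitude, not forecasts.

    # Back-of-the-envelope uranium arithmetic using the rough figures above.
    import math

    fossil_share = 0.866      # share of world primary energy from fossil fuels (2013)
    nuclear_share = 0.044     # share from nuclear fission (2013)
    supply_years_now = 100    # known uranium at current nuclear usage rates

    # Replace all fossil fuel energy with fission: nuclear must scale up ~20x.
    scale_up = fossil_share / nuclear_share                # ~19.7
    supply_scaled = supply_years_now / scale_up            # ~5 years

    # Breeders burn nearly all the uranium instead of the few percent that
    # conventional reactors manage (approximate utilisation figures).
    conventional_burnup = 0.05
    breeder_burnup = 0.98
    supply_breeder = supply_scaled * breeder_burnup / conventional_burnup  # ~100 years

    # With energy use growing at rate g per year, a static S-year supply
    # instead lasts T = ln(1 + g*S) / g years.
    g = 0.02
    supply_with_growth = math.log(1 + g * supply_breeder) / g              # ~55 years

    print(round(scale_up, 1), round(supply_scaled, 1),
          round(supply_breeder), round(supply_with_growth))

The exact outputs don't matter much; the point is that even generous assumptions about breeders buy only a handful of decades once growth is accounted for.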

Still on the topic of nuclear, we have the various proposed designs for low-pressure reactors, mostly based on thorium. I'm pretty bullish on these reactor designs. Their only problems are that they are unproven to date and will probably take a lot of money to decommission. This is massively better than any of the other nuclear options, but there is still that nasty issue of time.

That pretty much covers all the non-renewable energy sources we could turn to. The renewables as a group, excluding hydroelectric, have one major problem: energy storage. I'll discuss the precise productivity of the major renewable sources in a bit, but even before that there is the issue of energy storage, that is, storing energy when we have too much so we can use it when we don't have enough. More succinctly, using these natural sources of energy outside their natural rhythm. The most common example is solar. About half the day it is sunny and you get solar power. The other half of the day it is dark and you don't. So you need to store energy from day into night. Unfortunately energy storage is too expensive today and most of the solutions we have don't scale. Often a renewable energy source with an acceptable EROEI on its own will be made unacceptable when the costs of energy storage are taken into account. In places where traditional hydroelectricity is in use it is possible to do pumped storage, that is, pumping water back up into the reservoir when you have excess power. There are relatively few places where this can be done, since some semblance of consistent water flow must be maintained along the natural rivers which are dammed, for ecological and economic reasons.
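
One hedged way to see the storage penalty is to fold round-trip losses and the storage system's own embodied energy into the EROEI figure; the numbers below are placeholders chosen only to illustrate the effect, not measurements of any real system.

    # Toy EROEI-with-storage calculation; all inputs are illustrative placeholders.
    def eroei_with_storage(eroei, stored_fraction, round_trip_eff, storage_embodied_frac):
        """Effective EROEI once part of the output must pass through storage.

        eroei: energy returned per unit invested for the bare source
        stored_fraction: share of output that must be time-shifted through storage
        round_trip_eff: fraction of stored energy that comes back out
        storage_embodied_frac: extra invested energy for the storage system,
                               as a fraction of the original investment
        """
        delivered = (1 - stored_fraction) + stored_fraction * round_trip_eff
        invested = 1 + storage_embodied_frac
        return eroei * delivered / invested

    # e.g. a bare source at EROEI 10, half its output stored at 75% efficiency,
    # with storage adding 30% to the embodied-energy bill:
    print(eroei_with_storage(10, 0.5, 0.75, 0.3))   # ~6.7, noticeably worse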

Obviously traditional hydroelectric is an exception to this rule. If you don't need the power, just leave the water in the reservoir. Unfortunately there isn't much opportunity for expansion of hydroelectric power because most of the really good spots are already generating power. Run-of-the-river hydroelectric, while good at small scales able to power one house, has severe issues of environmental damage and consequent silting of the piping when done at industrial scales. So hydroelectric isn't going to save us.

Wind either suffers from a limited supply, when done on land, or expensive maintenance and power transportation, when done on the ocean. That is, when putting windmills on land there are only so many places where there is enough wind in a year to make it worthwhile, mostly on ridge tops. The ocean has plenty of wind once you travel a few tens of kilometres out, but then the windmills must survive a constant barrage of salt water spray and the pounding of the sea. We can't really build machinery able to survive this without frequent maintenance, which raises the operational costs. Then once you've generated the electricity you need underwater electrical lines to move that power to the shore. These lines also suffer from salt water, thus being costly to install, and have significant power losses simply due to distance. This is in addition to the energy storage overheads. Ocean wind ends up with an EROEI which is marginal. It could be barely acceptable, it could not be; it's too close to call at a time when we don't depend on it and can subsidize it using other energy sources. Land-based wind is useful where it exists, but it's pretty limited in scope.

Tidal power is too limited and has a terrible EROEI.

Solar-electric is a pretty good deal by itself. Unfortunately storage losses push its EROEI too low.

Solar-thermal is probably the best of the renewable options, except that it requires large amounts of space and only works in hot geographic areas. You are unlikely to find much success with a solar-thermal plant during the winter in northern Europe.

Biomass and biofuels have an acceptable EROEI as long as you don't have to transport it far. Whenever you have to move them more than a couple dozen kilometres the EROEI becomes unacceptably low.

Geothermal only has an acceptable EROEI in certain geographies. In much of the world it just isn't worth it.

So that covers the major alternative energy sources. The renewables could be made to work with lifestyle changes to minimize how far the energy would need to be transported and how much energy would need to be time shifted, such as being stored from day to night. The nuclear options look alright for the most part, but I'll discuss the major problem in our specific situation next. For obvious reasons of climate change and depletion fossil fuels don't really have a long term future. Fossil fuels are also approaching or recently passed their peak extraction rates on a net energy basis which makes them unsuitable in the medium term.

Now let us talk about energy transitions. An energy transition is when a society moves from one predominant energy source, say wood, to another, say coal. This has happened a few times over the millennia. Examples include moving from human power to animal power, animal power to biomass (e.g. wood), wood to coal and coal to oil. I won't go into too much depth, but there are two important points to glean from the history of energy transitions.

The first is that no matter the economic circumstances it takes no less than twenty or thirty years to complete the transition. Even then a significant amount of energy will still be delivered from the old energy source for decades to come. This time lag is due to several factors. The most obvious factor is that it is rarely economic to throw existing infrastructure and machinery away. If you have an expensive steam engine running your factory which is ten years old, it is uneconomic to throw that perfectly good engine away so shortly into its usable lifetime. Instead the rational factory owner will continue to use the coal steam engine for decades to come. The costs of the engine have already been paid and it suits the purpose as well now that oil is on the scene as it did before. It takes decades for this old equipment to wear out and need replacing. As such equipment is slowly replaced it will tend to be replaced by equipment using the new fuel.

When an older fuel is being supplanted by a newer fuel, uses for which the older fuel was only marginally sufficient, that is it was only good enough because it was all there was, are quickly replaced. So you quickly see trains stop using coal when oil becomes available, for example. This reduction in demand lowers the price of the older fuel, making it economic in more uses than previously and therefore demand increases. That same factory owner, when the steam engine is up for replacement, will take a hard look at the relative costs of coal and oil to power his factory. If the price of coal is down because trains have switched to oil and coal stoves are on the way out, then it will make sense to buy a new coal steam engine to replace the last, even though in slightly different circumstances an oil engine would be a good choice.

In this way an energy transition is dragged out for decades until all the old equipment has worn out and been replaced, possibly a handful of times as the older fuel is slowly replaced in most uses. The old fuels never truly go away, but through the transition are kept in niche applications where they have some cost or convenience advantage. Many people around the world still heat with wood, as one example, because they are able to harvest it locally and at low cost. Coal is still widely used to generate electricity for the same reason.

The second important point to understand from the history of energy transitions is that, historically, all energy transitions have been from a lesser energy source to a greater one. That is, going from wood to coal there are no situations where wood is better to have than coal. Similarly there are no situations where oil is worse to use than coal. The only advantage older fuels can have are cost due to drastically decreased demand. This point is critical to understand because it makes it painfully clear that we are in a historically unprecedented situation. Never before in the history of humanity have we been forced, through scarcity, to transition from a superior fuel to an inferior one.

All the alternatives to oil as a source of energy are undoubtedly inferior. Even the other fossil fuels are not as good as oil. Coal has the difficulty of being solid and needing large engines to burn it. Natural gas can use the small and flexible internal combustion engines we are all used to, but it is expensive to transport and transfer between containers. All the other alternatives tend to generate only electricity, and while electricity is quite convenient for most uses, it is expensive and difficult to store in a portable vehicle in sufficient quantities that no recharging would be necessary for an entire work day.

The fact that all the alternatives are less convenient and require more costly infrastructure will make this energy transition especially difficult. All the past transitions have been voluntary and thus happened easily, at the best possible time. It was soon convenient for every possible use to have an oil-driven variant because there were no downsides to doing so. This energy transition, however, will be mostly involuntary. Any application which switches will face inconvenience and additional costs at every step. Given this you should expect that we will continue to use oil until we have no other choice. At the point at which we have no choice we might not have the wealth and resources remaining to start investing in alternatives. At that point the energy available to invest will be decreasing, whereas it was increasing during previous transitions.

More generally the necessity to invest energy into building the next energy source when society is already suffering from a shortage of energy is called the Energy Trap. This is covered in more depth on Do-The-Math, but I'll lay the basics out.

Consider a world where the economy isn't doing all that great and energy prices seem awfully high. This economy is more or less at the limits of the energy it can affordably extract. This economy is in a zero-sum energy state. That is, it can't easily produce more energy on a whim. Instead long-term investment is necessary, and every gallon of gas which goes into building the new energy infrastructure must come out of the gas tank of some person. Any food to feed the workers must come from the plate of somebody not constructing that infrastructure.

At this point there are really two things which could be done. The basic case is to continue along as per normal and do nothing special. As the fossil fuels continue to deplete the economy continues to be bad and gets slightly worse. This continues until things get really desperate, but by that point the economy is nowhere near strong enough to do anything. There isn't enough energy left to pay for any new infrastructure.

On the other hand, it is possible that the civilization could choose to start investing in the necessary infrastructure now rather than later. This takes a strong will because while this investment is happening everybody needs to suffer more than they otherwise would. Everybody needs to sacrifice in order to build a better future. What form this infrastructure takes doesn't really matter except insofar as it determines the time lag between beginning to invest and that investment starting to produce power. The longer it takes, the longer and deeper the required suffering. If it takes ten years to build a power plant of some particular type, whether it's nuclear or solar thermal or a wind farm or a tree farm, then for those ten years everybody must make do with less to ensure construction gets the real resources, food or fuel or concrete, necessary to build it.

Obviously energy sources we already know how to build are faster to build and will thus result in less overall suffering. In this vein proposed energy sources which haven't been demonstrated in the real world at industrial scale present real problems to the timeline. If ten or fifteen years of research and development are necessary to scale some energy source up to industrial scale and then five years to build a plant then that's twenty years of suffering during which time the energy situation has worsened, requiring more plants to be built. But since energy is tight only so many plants can be under construction at one time.
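
To make the shape of the trap concrete, here is a toy model under loudly made-up assumptions (a steady decline in the legacy supply, a fixed build time, a fixed fraction of energy diverted to construction); it only illustrates the dynamic described on Do-The-Math, not its actual numbers.

    # Toy model of the energy trap: divert energy now, get new supply after a lag.
    # Every number here is an illustrative assumption, not a measurement.
    def net_energy(years=30, decline=0.02, invest_frac=0.1, build_time=10, payoff=0.3):
        """Yearly net energy available to society, starting from 1.0 units.

        decline:     annual decline of the legacy (fossil) supply
        invest_frac: fraction of gross energy diverted into new infrastructure
        build_time:  years before the investment produces anything
        payoff:      new supply produced per year once construction pays off
        """
        series = []
        for t in range(years):
            legacy = (1 - decline) ** t
            new = payoff if t >= build_time else 0.0
            diverted = invest_frac * legacy if t < build_time else 0.0
            series.append(legacy + new - diverted)
        return series

    do_nothing = net_energy(invest_frac=0.0, payoff=0.0)
    invest_now = net_energy()
    # The investing society is worse off for the first decade (the trap),
    # then better off once the new supply comes online.
    print(min(invest_now[:10]) < min(do_nothing[:10]), invest_now[-1] > do_nothing[-1])

The investing society is worse off for the entire build-out period, which is exactly why the investment rarely gets made, and better off ever after.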

It is for this reason that technology won't save us: we don't have time to wait for the technology to be developed and deployed. We don't have time to wait for it to mature. The technologies we have proven and scaled are not up to the task at hand, and the technologies which might be up to the task, such as thorium nuclear, are not mature enough to be ready in time. This is compounded by the fact that the higher the technology, the more energy intensive it tends to be. Some technologies may end up worth the costs, but until a technology is mature we cannot be sure.

Given the depletion rates we are seeing in world fossil fuel production on a net energy (EROEI) basis, we should be rolling out the energy sources and infrastructure of the future today. Consequently we don't have time to wait for new technology to come over the horizon; we need to make do with what we know how to build today, even though it isn't cost competitive and involves more suffering than sticking with the status quo.

What You Could Do

So far I've not been a bringer of good news. I've described why I believe the future of the world to be grim and why our current civilizations won't fix the problems, even though they could. In summary things aren't looking so great for business as usual. The good news is that I've laid the groundwork and can start getting onto the more positive business of describing what an individual can do and eventually finish my introduction and start describing what I want to do myself.

Just because civilization at large is unlikely to turn itself around enough to solve the coming problems does not imply that an individual is powerless. There are several broad actions a person can take to both protect themselves from the downsides of the future I envision and at the same time move the immensity of civilization towards the correct path ever so slightly. Get enough individuals doing their part and it's even possible that enough momentum could be gained to help civilization as a whole.

The three roots of the problems I described apply equally at every level, from the largest aggregations of civilizations down to the actions of a single person. Consequently, the general types of solutions to diminishing returns, unsustainable practices and peak net energy are applicable at all levels as well. The specific implementations will differ, but the general concepts do not.

The biggest of the three core issues is peak net energy. As I've stated before, if you have ample net energy available you can do anything, even if you must do it in a wasteful way. Diminishing returns and unsustainable practices are real issues, but the historically increasing supply of net energy has covered over their effects. When net energy ceases to increase, individuals and countries and entire civilizations become unable to ignore those two issues.

Therefore the best way to protect yourself as an individual is to reduce your energy use now. Beating the rush will provide you time to try solutions and fail, with the knowledge that you can always fall back on your previous, more energy-intensive, methods while you work out the kinks. Reducing your energy use is not restricted to using efficient lightbulbs and turning the heat down. Though those are easy and useful methods they are only the tip of the iceberg. Consider: how do you know you are using less energy than before? Do you set up a whole-house energy monitoring system, measure your use throughout the year, make some small change and then wait another year to see if you used fewer kWh of electricity and fewer litres of natural gas? It's not always clear that one change, turning down the heat for example, will really reduce your overall energy use. It's possible, for example, that if you turn down the heat you drink a lot more tea or have longer, hotter showers, which results in your energy use actually going up.

You don't run these year-long, carefully controlled experiments. Instead you make some change and check the bills at the end of the month to see if you saved money compared with last year or not. The reason you do this is because money is a proxy for energy. A fixed amount of money tends to be able to buy, adjusted some for the convenience of the form of energy and the final form it will be used in, about the same amount of energy. As an example, consider the cost of heating with electric heaters versus a gas-fed radiator system. The electrical heating is going to be significantly more expensive, but only because it is a more convenient form and is poorly suited to the use. In order to generate that electricity most likely some other fuel was burned, at some efficiency less than 100%, that heat turned into kinetic energy, at some efficiency less than 100%, that kinetic energy turned into electricity, at some efficiency less than 100%, that electricity transferred to your home, at some efficiency less than 100%, and finally turned into heat. You pay for the source energy, even if you lose most of it along the way to using it. The gas boiler, on the other hand, loses much less of its energy because there are fewer conversions. The gas is pumped into a pipe, using some amount of energy, arrives at your home and is burnt, at some efficiency less than 100%. If we assume a 10% loss for each conversion you can easily see why electricity is much more expensive to heat with than gas. You pay for the same amount of input energy, but get much less usable energy.
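
As a quick sketch of that chain, assuming (purely for illustration) a flat 10% loss at every conversion or transport step rather than real plant efficiencies:

    # Compare delivered heat per unit of source fuel, assuming an illustrative
    # 10% loss at each conversion/transport step (not real plant efficiencies).
    STEP_EFFICIENCY = 0.9

    def delivered_fraction(steps: int) -> float:
        """Fraction of the original fuel energy left after `steps` lossy steps."""
        return STEP_EFFICIENCY ** steps

    # Electric resistance heat: burn fuel -> heat -> kinetic energy -> electricity
    # -> transmission -> heat in your home (five lossy steps).
    electric = delivered_fraction(5)
    # Gas boiler: pump gas to your home -> burn it (two lossy steps).
    gas = delivered_fraction(2)

    print(f"electric: {electric:.2f}, gas: {gas:.2f}")                      # ~0.59 vs ~0.81
    print(f"electric heat needs ~{gas / electric:.1f}x the source energy")  # ~1.4x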

However, if you look at a different use the math can work out differently. Natural gas cannot power a computer directly so you need to do all the conversions the power company does at home, but since you are doing it at a smaller scale your conversions will be less efficient and so powering a computer using natural gas at your home will cost more than using electricity from a power company.

If money is a proxy for energy and you check your utility bills to confirm that you are truly using less energy, then it is easy to see that your utility bills probably cover only the smallest part of your energy use. As a portion of all your expenses, utilities almost certainly do not account for 50%. Instead most of your money goes elsewhere, into housing and insurance and food and products. A general rule of thumb to determine how much energy went into an item is to look at its wholesale cost and divide that by the recent cost of a barrel of oil. The approximate amount of energy which went into that item will be equivalent to how much oil the wholesale price could buy. If you want to put a significant dent in your energy use you have to spend less money. The less money you spend the less energy you are consuming.
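
As a worked example of that rule of thumb, with made-up prices (only the energy content of a barrel of oil, roughly 6.1 GJ, is a real figure):

    # Rule-of-thumb embodied energy: wholesale cost expressed as barrels of oil.
    # The item price and oil price below are made up for illustration.
    BARREL_PRICE = 60.0         # recent cost of a barrel of oil, in dollars
    ENERGY_PER_BARREL_GJ = 6.1  # approximate energy content of a barrel of oil

    def embodied_energy_gj(wholesale_cost: float) -> float:
        """Rough embodied energy of an item, estimated from its wholesale cost."""
        barrels = wholesale_cost / BARREL_PRICE
        return barrels * ENERGY_PER_BARREL_GJ

    # A $600-wholesale appliance is roughly ten barrels' worth of energy:
    print(embodied_energy_gj(600.0))   # ~61 GJ, far more than a month's utility bill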

At this point it is critical that you understand the subtle distinction between frugal and cheap. The difference is primarily based on time horizon and impulse control. A cheap person finds things with the lowest possible prices and buys those. They will often put up with inferior and worn-out items in their life year after year. These are the sorts of people who shop at discount stores selling only the cheapest and most poorly made merchandise. Though cheap people avoid spending more than absolutely necessary on any particular item, they always seem to be shopping for something, either to replace something which is broken or because they want something right now.

Frugal people, on the other hand, search for things which have the lowest cost over a long period of time. Where a cheap person might buy a $10 toaster because it is on sale, a frugal person buys for quality, durability and lifespan. That is, a cheap person will tend to buy a $10 toaster every year or two while a frugal person will buy a $50 toaster once a decade or two. A cheap person looks at the cost right now; a frugal person looks at the cost over ten or twenty years. Consequently you don't see frugal people buying nearly as much. Firstly, they have to replace things much less frequently and so are tempted by store displays less, simply because they aren't in stores as often. Secondly, impulse purchases are often avoided because their estimated per-year cost is twice or more that of most other purchases the frugal make. That trendy $60 kitchen gadget which will be forgotten in six months doesn't compare well against a $500 food processor which will last thirty years.
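
The comparison is easy to make concrete; the prices and lifespans below are the illustrative ones from the paragraph above:

    # Annualized cost comparison: cheap vs frugal purchases (illustrative figures).
    def cost_per_year(price: float, lifespan_years: float) -> float:
        return price / lifespan_years

    cheap_toaster = cost_per_year(10, 1.5)     # replaced every year or two
    frugal_toaster = cost_per_year(50, 15)     # bought once, kept for a decade or two
    trendy_gadget = cost_per_year(60, 0.5)     # forgotten in six months
    food_processor = cost_per_year(500, 30)    # lasts a generation

    for name, cost in [("cheap toaster", cheap_toaster), ("frugal toaster", frugal_toaster),
                       ("trendy gadget", trendy_gadget), ("food processor", food_processor)]:
        print(f"{name}: ${cost:.2f} per year")

The trendy gadget works out to $120 per year of use, an order of magnitude worse than the food processor it is being compared against.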

Similar logic applies to reducing your net energy use by reducing how much money you spend. How much money you spend today is much less important than how much money you spend over the next thirty years. As the amount of energy embedded in an object can be estimated by dividing the cost of that object by the price of oil at the time of its manufacture, the best way to reduce how much energy you use over the long term is simply to replace items less often. Thus the best way to reduce your net energy use over the long term is to buy better quality items once instead of cheap items every year. Be frugal, not cheap. Best of all, doing so cuts your net energy use the most while likely increasing your standard of living.

A knock-on effect of spending less money towards the goal of reducing your net energy use is that it helps keep you away from unsustainable practices. Most obviously, spending less money keeps you out of debt. Since debt is the primary unsustainable financial process this is a good thing. Keeping items longer, where appropriate, also defends against the unsustainable bigger-is-better trend in consumer products. Year after year everything you buy at a store, other than electronics and food, seems to be getting bigger. Bigger furniture, bigger appliances, bigger housing, bigger vehicles and bigger storage to manage it all. Eventually this trend will reverse, but until then buying early is the only real way to avoid this growth. The most insidious part of this growth is that as everything gets bigger it costs ever more to keep and use. For example, as large kitchen appliances (oven, range, refrigerator, etc.) grow they take up more room in your kitchen. This reduces the amount of counter space you have available. This is a serious problem if at the same time the size of your small kitchen appliances (toaster, blender, food processor, etc.) is growing as well. The net result of larger appliances, smaller counters and more specialized appliances is that you no longer have room to keep all your commonly used appliances on the counter, so you have to swap them around. This moving into and out of cabinets just adds unnecessary frustration and work to your life. Once you've bought an appliance it doesn't grow behind your back. The longer appliances last the less you experience kitchen shrinkage.

On the individual level avoiding unsustainable practices amounts mostly to avoiding ever increasing anything. This might mean ever increasing debt or ever larger appliances or ever more clothes or just ever more garbage. Avoid the exponential function and you will be doing well.

This is, from a high level, simple and easy. At least, it's simple in theory and easy once you are on track. Getting onto this track can be difficult. If you are at all short on money it can be difficult to tell your friends you can't go out this week because you are saving for a new coffee table, when they suggest just going to Ikea, spending thirty bucks and calling it a day. If your toaster is broken it can be a hard thing to live without a toaster for a few weeks while you save and search for a good one. However, as you incrementally replace your things with longer-lasting versions (you don't have to start out with the everlasting toaster right away; it's quite acceptable to work your way up to it by going through a better toaster first) you'll find yourself short of money less and less, since you'll be rushing to replace suddenly broken items less often.

Diminishing returns is a significantly more difficult and at the same time significantly more rewarding set of problems to solve. As described previously there are essentially two possible responses to diminishing returns: accept the problem as too costly to solve, or modify the structure of your life to eliminate the conditions which lead to the problem in the first place. Taking the first tactic is simple enough: you do some thinking, run the numbers and end up deciding that living with the situation is the best thing to do. For example, assume you live in a city and work a nine-to-five white collar job. You live in an inexpensive but nice apartment nearer to the outskirts than the downtown core. You commute by public transit every day and it takes you forty-five minutes each way. Once you get home you have dinner and then drive to the gym for an hour-long workout. When you get home from that you find yourself with no free time left and wish you had more. You would drive to work but don't think you can afford the extra parking and fuel. Living with the problem is just accepting that you can't do any better. Trying to improve any single dimension (time, money, housing, fitness) just degrades some other dimension. You can't reduce your commute without spending more money and you can't get more free time without getting out of shape.

On the other hand you could step back and take a hard look at what you truly want and which things you think you want but which are only implied by the standard set of solutions. What you really want is a place to live, ample free time, sufficient money and being physically fit. The standard solutions to this are to live far from work, take transit to work and go to the gym after you leave work. However the standard solutions aren't the only ones. Consider the less common choice to sell your car and move close enough to work to walk. Say your commute time doesn't change, it still takes you forty-five minutes each way, now on foot, and further that your apartment costs 25% more. Even with these reasonable assumptions you can still come out ahead. You pay more for your housing, but nothing for an automobile or a transit pass. What used to be dead time sitting on a bus or train is now time you are exercising, so after work you don't have to do it separately. Not only that, but you are exercising more than you were before.
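
A rough tally of that trade, using made-up figures in the spirit of the example (the gym drive, rent and running costs below are all hypothetical):

    # Rough weekly time/money tally for the commute example (all figures invented).
    WORK_DAYS = 5

    # Status quo: transit commute plus a separate drive to the gym.
    transit_minutes = 2 * 45 * WORK_DAYS        # dead commute time
    gym_minutes = (60 + 30) * WORK_DAYS         # workout plus a guessed 30 min of driving
    status_quo_busy = transit_minutes + gym_minutes

    # Walking: the commute doubles as the exercise, no car, no transit pass.
    walking_busy = 2 * 45 * WORK_DAYS

    freed = status_quo_busy - walking_busy
    print(f"time freed: ~{freed / WORK_DAYS / 60:.1f} hours per weekday")   # ~1.5 hours

    # Money: 25% more rent vs no car and no transit pass (monthly guesses).
    rent_increase = 0.25 * 1200                 # hypothetical $1200/month apartment
    car_and_transit = 300 + 100                 # hypothetical running costs plus pass
    print(f"money saved per month: ~${car_and_transit - rent_increase:.0f}")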

Moving further into the city and giving up your car is a bit of an extreme example of the lifestyle choices one could make to simplify and improve their life. A less extreme example would be one of heat. Above the freezing point of water, most people don't truly care how warm their house is. All they care about is that they, themselves, are warm and that they don't touch anything which makes them cold. The current standard answer to this problem is to heat a house and everything in it to a cozy 22 degrees Celsius. Obviously this takes quite a bit of energy. Since energy is expensive the suggestion is to turn your heat down. A more radical suggestion would be to turn your heat way down and apply more focused heating devices. A low-voltage electric mattress pad, for example, makes it quite comfortable to sleep in a bedroom with a temperature near 13 degrees. Likewise, electric blankets and a sweater make couches comfortable at similarly cold temperatures. Once those large spaces are taken care of it is no longer necessary to heat the full living area to a full 22 degrees. Instead only focused areas need heating, such as bathrooms, beds, seats and the plates you will be eating from.

In many cases this kind of systematic simplification is possible. Make no mistake, it is often not the easiest path, but it often gives the greatest rewards. Eliminating the car from your life gave you about an hour and a half back each day while saving you money at the same time. More focused heat means carefully considering the situations where you want to be warm and finding appropriate solutions instead of depending on heating everything. But once you are using focused heat there is no reason to stop at 22 degrees, or whatever money conscious temperature you choose. If you are only heating a small space why not have the bathroom at 30 degrees for your shower? Or give up those heavy winter pyjamas and turn the bed temperature up to 11. There is no need to be kinda warm and suffer from drafts when you are clothed and equipped to be comfortable in the teens.

As an individual there are things you can do to prepare yourself for the grim future and thus avoid the worst of its effects. Taking personal action to better your life in these ways is not only beneficial to yourself; by being an example you will discover and spread the ideas which will be necessary in the future for all the people of your region. All it takes is spending less, thinking more and understanding what you truly want.

What Could Be Done

In the previous post I discussed the various costs of additional complexity and described how civilizations choose additional complexity over any alternative nine times out of ten when it comes to solving any particular problem. Obviously when the resources available to civilization are shrinking with every passing year, a situation which will come to pass in the next few years if it hasn't already, additional complexity is no longer affordable in most cases. Eventually not only will new complexity be unaffordable, but existing complexity will be as well. At that point civilizations as currently structured will start running into serious issues.

All is not hopeless, however; there are things which can be done. There are alternatives to complexity which cost less and are nonetheless effective. In this post I'll describe some of these rarely taken alternatives.

The most obvious alternative, when it comes to an issue where complexity is an expensive solution, is simply to live with the problem. That is, don't try to fix it. If a careful cost-benefit analysis shows that preventing a million dollars a year worth of fraud and petty theft within a company will cost ten million dollars a year it might just be better to live with the loss. At some point civilizations will have to grow up and realize they are not fledgling deities on their way to becoming omnipotent. Civilizations are institutions made up of mortal beings which have finite resources. A civilization can accomplish many things, but it cannot accomplish everything. A civilization could choose to live with a problem instead of spending immense amounts of money fixing minor annoyances.

As far as I can tell, the only reason this doesn't happen more often is because most issues are solvable and it is assumed that if something is solvable it must be solved, with no consideration for the relative costs of solving versus not solving the issue. Essentially it seems that we try to solve every problem because those in power, and the voters who put them there, are juvenile and self centred. Juvenile because they believe that every problem can and should be solved, self centred because they assume any issue affecting them is so important that the cost of solving it must be insignificant compared to the gains. When I walk to the train station I have to cross a busy street and often have to wait a few minutes for traffic to be stopped for me. This is certainly an annoyance and I would prefer that traffic be routed so as not to inconvenience me, yet I understand that the cost to society as a whole outweighs my personal gain in this case. Special interest groups make no such accommodation.

The second way civilizations could solve problems is through efficiency reform. When you think of efficiency, better gas mileage and LED lights probably come to mind. While these are certainly efficiency improvements, they are piecemeal improvements, not systematic improvements. Piecemeal improvement, going from incandescent light bulbs to CFLs to LEDs, is doing the same thing in pretty much the same way but doing it more efficiently. This type of efficiency is limited in the effect it can have. An LED may be five times as efficient as an incandescent bulb, using 80% less power, but it can't ever use 200% less power, which would mean the light bulb supplied power along with light. Piecemeal efficiency is certainly important, but if at most you can be 100% efficient and civilization has been improving efficiency for centuries, then you might suppose that the average efficiency is around 30%. That might sound low, but it only leaves a factor of three improvement to be had at the very most.
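The arithmetic behind that limit is worth spelling out: if a process is already some fraction efficient, perfecting it can at best bring it to 100%, so the remaining headroom is just the reciprocal of the current efficiency.

    # Maximum possible gain from piecemeal efficiency improvement: if a
    # process is already `efficiency` efficient, perfecting it can reduce
    # its energy use by at most a factor of 1 / efficiency.
    for efficiency in (0.20, 0.30, 0.50):
        print(f"{efficiency:.0%} efficient -> at most {1 / efficiency:.1f}x improvement left")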

Piecemeal efficiency also doesn't change the system in any way. It merely does the same things at slightly lower cost. This forgoes any chance at reform which solves the root of the problems. For example, consider the problems of traffic, air pollution and social distance, not knowing your neighbours, in the developed world. The standard piecemeal solution would be to improve the pollution controls on cars, add more freeway and promote social programmes which people don't have time to take part in anyway. All this would improve the mentioned problems by some percentage, but at great cost. What could be done instead is to take a systematic view and suggest a different change. Traffic and air pollution are proportional, in part, to the number of kilometres driven. How well you know your neighbours is proportional, in part, to how much time you spend near your home, which is usually whatever time you have left after commuting and working. Instead of making cars pollute less, expanding freeways to reduce traffic and spending money on social programmes to increase neighbourhood cohesion, why not reduce how far people have to travel?

If some reforms were put in place to move work and shopping closer to where people live, the number of kilometres people needed to drive could be reduced to 25% or less of the current value. With so many fewer kilometres to drive there would be less traffic and no need for more highways. Fewer kilometres means less pollution and more time near home to get to know your neighbours. Automobile pollution can only be improved so much at the level of the car, and at significant cost, but moving businesses closer to housing saves money and gives much larger reductions.

This is systematic efficiency improvement: changing the system in order to increase the efficiency of it as a whole, even if some individual parts become less efficient. Systematic efficiency isn't limited by the mathematics which caps piecemeal improvements at 100% savings. Systematic efficiency can be many times more effective by focusing on the outcomes we desire and being willing to eliminate wasteful societal accidents, such as long commutes, in order to achieve those goals. Nobody wakes up in the morning with the life goal of sitting in an expensive-to-own car for two hours a day of traffic in order to get to a mediocre job and arrive home with neither time nor energy to see their family and friends or to get a full night's sleep. If nobody wants it why do we work so hard to perpetuate it?

Systematic efficiency reforms lead us naturally to the third major alternative method civilizations could use to solve problems: simplification. We saw that systematic efficiency reforms can drastically improve efficiency and solve problems by addressing them at a higher level. That is, instead of making cars more efficient (a low level solution) we can organize our cities to need cars less (a high level solution). In many ways these improvements require drastically reducing some problematic element of modern civilization through reorganization, which could be thought of as a milder form of simplification.

Consider again the traffic and pollution example. While reducing commuting distance helped, motorized commuting is still a significant part of that world, with its significant costs. Even if you don't drive as far to work you still need to own and maintain a car in most cases. What if we took the systematic reforms further? What if we redesigned cities such that it was possible to walk to work in pretty much all cases? Such a city would have little need for personal automobiles or public transit at all. Suppose that small commercial hubs were surrounded by a ring of residential space. If the major businesses of the area are all within walking distance of their workers then many of the customers of those businesses are consequently within walking distance. That is, if the office area is within walking distance then there must be restaurants within walking distance of the tower, whose workers can in turn walk to work because our city is built that way. Restaurants need suppliers and repairmen so there will be a grocery store and some appliance shops in that commercial area as well. This continues until all the daily needs of modern life are available within walking distance of a person's home. Shops and services which are needed less often may not be available within any particular hub, but would be contained within some hub. Since trips to those other hubs would be infrequent, and most of your friends would be in your own hub because that is where you spend most of your time, you probably wouldn't own a car just to travel between hubs. Inter-hub travel would happen via public transit or hired car.

Eliminating mechanized daily commuting simplifies the society drastically with a net quality of life increase. No longer does everybody have to know how to drive, own and maintain their own vehicle and find someplace to park it. All those costs disappear. Cars become something you rent when you need them, like carpet cleaners, because you don't need them terribly often.

Simplification is a powerful tool to solve problems civilizations have by removing the causes of problems instead of merely papering over the symptoms. It takes careful consideration and the courage to stop throwing good money after bad, but it can be done. In the future world where resources are becoming ever scarcer and the standard method of adding complexity to solve problems has passed the point of diminishing or even negative returns these alternative strategies must be employed. When you can no longer afford to do what you've done in the past you must do something different.

How Civilizations Solve Problems

Imagine, if you will, that you wake up one morning to breaking news alleging that a medium sized corporation has been caught in the middle of some chronic misdeed. Perhaps it's fraud, perhaps corruption, perhaps it's collusion with competitors or environmental degradation. Whatever it is, tens of thousands are affected. What do you expect to happen?

Certainly there will be investigations and court cases and jail time. But there is also likely to be outrage and politicians saying "Never again". And if the politicians start saying that what do you expect to happen?

Would you expect a calm and reasoned debate about preventative measures and the cost to society of those measures compared with the cost to society of a repeat of the misdeed? Would you have a reasonable expectation that this debate would result in a decision to make no changes, create no new regulation and give no new powers of reporting and oversight? If so please tell me which country that is, I just might want to move there.

A significant misdeed making the news almost never results in the conclusion that, on balance, preventing a similar misdeed from ever happening again is not worth the cost of the extra regulation necessary to prevent it. Instead, almost irrespective of the issue, you'll get a bunch of new laws or new regulations or sometimes even new government departments intended to prevent a repeat offence. In short, you would expect the civilization to add additional complexity in order to fix the problem such that it never happens again. At least, not in quite the same fashion.

Now consider how often you hear about a series of laws being entirely revoked in your home country. Not replaced, not combined with some other set of laws, but revoked. One day chewing gum while flying a plane is illegal and the next you can chew gum wherever and whenever you please. Entire regulations disappearing, entire government departments being shuttered.

Weigh those occurrences against the introduction of a new law, increased regulation or a new government department of gum chewer certification. Which happens more often in your experience?

Almost certainly you experience reduced complexity in law, government and business quite rarely indeed, especially when compared to the signs of increasing complexity. This is because civilizations tend to solve problems by adding rules, regulations, overseers, certifications and procedures. Don't forget all the specialist jobs created to facilitate the new procedures, push the new mountains of paper around, audit that all the additional complexity is being followed properly and train those who do the actual work to conform to the ever changing regulatory landscape. In short, every problem ever seen has a special rule or procedure put in place to prevent it from occurring again. Every problem is solved by adding complexity and specialization.

In the beginning there was little specialization among humans. All young men did pretty much the same thing as other young men, all young women did pretty much the same thing as other young women. Any specialization which happened was dictated more by your age, health and sex than anything else. This is not to say that everybody was equally good at everything. Certainly some young men would end up being better hunters than the others, some women better at teaching their children. Following the standard economic model it would obviously have been better for the best hunters to hunt more to allow the best builders to hunt less and build more. Eventually this is exactly what happened. Instead of every male having to perform all the tasks of civilization they started to have trades. Many were farmers and farming is all they did. As farming was all they did they were more efficient at it, both because they got more experience and because they could do things with less waste, having economies of scale on their side.

Specialization is a form of complexity. If you have specialized farmers they still need cloth, so you have specialized weavers. They make different things so they need some system to trade with each other, and that means a marketplace and money. All of that is overhead beyond the farming and weaving, but it costs relatively little compared with the societal benefit of more productive farms and better weavers. This is all good complexity.

However, like nearly everything else in human experience, complexity suffers from diminishing returns. Additional complexity costs more and adds less value in a very complex system than in a simple one. If you already have lots of rules and roles, adding additional rules and roles takes a lot of work to fit them within the existing system without conflicting and creating new problems. When new problems inevitably arise the only way to solve them is with even more complex rules of interaction and job roles to enforce or facilitate those new rules.

Eventually the additional complexity becomes so onerous that it costs more to implement and maintain than the original misdeed did in the first place. To understand this consider the costs to society of all the governmental and corporate bureaucracy across the world which exists to prevent corruption of various forms. All those rules and regulations and forms and accountants and auditors and middlemen put in place to ensure that when you order a computer for work you aren't giving the contract to your neighbour or overspending or pocketing the money. Think about how much money it costs the entire economy to pay all those people to push paper day in and day out. Now consider how much the fraud those systems prevent could possibly be worth. A corporation will spend a hundred thousand dollars to ensure that a half million dollar purchase order is all above board.

There is no consideration when the rules are put in place or applied as to whether the cost of this compliance is proportional to the risk. Often it costs more than the feasible malfeasance itself would, but you can be sure that next year will bring some new regulation or procedure to prevent a type of fraud first exposed the year previous.

Never, it should be noted, will the annual change remove regulations. Why is pretty easy to understand: few people with full knowledge of the consequences of their actions and of their own free will would vote themselves out of a career. So the complexity just continues to mount, special cases on top of special cases. Past the point of diminishing returns, often past the point of zero returns and straight into the territory of negative returns. Civilizations solve problems by adding complexity, never voluntarily removing it, only ever adding more.

The more complex a system the more fragile it is and the more energy required for its maintenance. The legal frameworks of most developed countries require an expensive collection of judges, clerks, lawyers, accountants, specialists and bureaucrats to keep running. All these people need to be fed and clothed and housed and paid. All of this requires energy which is spent solely to maintain the system, whether the outcomes of their work are useful for society or not.

In a world where complexity is already well past the point of diminishing returns and the absolute amount of energy available to the civilization is decreasing with every passing year, it is ever more difficult to afford the existing complexity of a civilization, let alone add ever more complexity to solve the current crop of pressing issues. People will fight tooth and nail to keep their little comfortable piece of the existing complexity. Any civilization stuck between complexity which is too costly to keep and politically impossible simplification has its very existence threatened. And a world where the existing civilizations have been thrown into chaos is a very grim place indeed.

The Future is Grim

In the past, the somewhat distant past at this point, I've made Freenet-only posts talking about the nature of energy in the modern world and where it comes from. The original goal of doing this was to have a continuing and complete series explaining my views of the future and the basis for those views, starting at the bottom.

It was a reasonable plan, except that it's boring and until I complete the posts it would be difficult for those who don't already believe as I do to understand where I am going. It also has the difficulty that I would have to retread territory well worn and better said by others on the open web. It should come as no surprise that I put off posting on that topic until I dropped it entirely.

So, new plan: I'll just describe the future I see, explain in summary why I see it that way and go from there. Doing so I'll lay the groundwork for further discussion and research if anybody cares. I'll also be able to move on to the more interesting, from my perspective, discussion about what I want to do about it. I've read lots about why things are probably going to turn out the way I think they will, but I've not written much about my current thinking on what I should do about it now.

Thusly, let me explain why the future I see is, from the perspective of today, pretty grim. It all comes down to diminishing returns, unsustainable practices and peak net energy.

As you probably know, diminishing returns is the generally applicable fact that when you have a small number of things, such as apples or cars or sunny days, then adding one more unit of that thing improves your life a lot. If you haven't seen the sun for a month then a sunny day is pretty amazing. However, if you have an ample supply of something, then adding one more of it improves your life a lot less than it used to. If you go from having no car to having one car then your life can get significantly better. If you already have five cars then getting a sixth doesn't really add much to your life. This is the standard explanation of diminishing returns. However, the same theory goes further. At some point adding one more of something gives you no advantage at all. If you have a hundred apples you aren't going to be able to use all of them before they go bad, so adding one more apple offers no improvement to your life. This is the transition point from diminishing returns to zero marginal benefit, the point where adding one more unit of a thing neither improves nor worsens your life. Past even this point lies the point of negative returns. When you have had three hundred sunny, rainless days in a row you are in a drought. Every additional sunny day makes your life worse off and you would really desire a few rainy days. Past a point, adding more of something makes your life worse. This could be because it displaces something you need (rain), or because it costs you to handle the excess (having thirty tons of apples), or because the maintenance costs outweigh the benefits (having twenty cars and having to pay insurance on all of them).
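One toy way to see the whole curve at once is to give each additional unit a benefit that shrinks as you accumulate more, plus a flat holding cost for every unit you own. The numbers below are arbitrary; only the shape, diminishing, then roughly zero, then negative marginal returns, matters.

    # Toy model of marginal returns: each additional unit is worth a bit
    # less than the one before, while every unit owned costs a flat amount
    # to keep (insurance, storage, spoilage). All numbers are arbitrary.

    def marginal_return(n, base_benefit=100.0, decay=0.5, holding_cost=5.0):
        """Change in total value from acquiring unit number n (1-indexed)."""
        return base_benefit * decay ** (n - 1) - holding_cost

    for n in range(1, 9):
        print(f"unit {n}: marginal return {marginal_return(n):+.1f}")
    # Prints large positive values at first (diminishing returns), values
    # near zero around unit five, and negative values after that.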

When I look at the world around me I see many instances where it seems that we have hit diminishing returns and are suffering for it. I even see some instances where we have hit negative returns but nobody seems to notice. For example, cars are a useful thing. The more cars you have the more you can use them and the more you use them the more you can do. That is, until you get stuck in traffic because everybody is using cars for purposes with diminishing returns. There is no reason that most office workers need to commute to the office each day in their own single occupancy automobile. Yet they do, five days a week.

Another pertinent example is human density and centralization. When cities were first being built they were useful concentrations of specialized skills and inventory. Below a certain density a niche specialty shop can't be sustained because customers must travel too far for it to be worth the trip. Similar reasoning applies to industry centres. There are good reasons why similar industries tend to cluster into small geographic areas. If you have a few companies doing something you end up with their suppliers moving nearby, which makes opening other similar operations easier. At a certain point there is little point in setting up anywhere else; all you gain is greater distances, a lesser selection of suppliers and a smaller skilled labour pool. Over time this led to the high rises full of office workers pushing paper around. But at some point what was once the advantage of having a thousand metal workers within a single square kilometre becomes the burden of shipping a couple hundred thousand office workers into the downtown core from the suburbs every day. The centralization of similar industries reaches diminishing returns and suffers from gridlocked rush hour traffic for hours every day along with sky high office rents.

There has been a lot of ink spilled in recent years on the supposition that there are things we can do to replace unsustainable practices, such as clear cutting forests, with sustainable practices, such as selective logging with rotated replanting. This is not what I mean when I say unsustainable practices. Instead I mean many of the processes and assumptions the very core of our economy is based upon. I mean anything which has, at its core, an exponential function. The reason is simple: the Earth is finite and the exponential function is infinite. Eventually these two realities collide and one must give out. The Earth isn't going to magically become infinite, and so anything whose operation assumes something can grow exponentially forever is going to have to change drastically.

Again, if one looks around you can see the exponential function in many places in the modern world and many systems built upon the assumption that these things will continue to increase exponentially. For example, look at population. There is great worry at this time that China will not have enough workers to support its aging population in a few decades. This is blamed on the One Child policy, which broke the historical assumption that population always increases at some rate, say 1% per year. Anything consistently increasing by a percentage most years is exponential. When population stops increasing exponentially there arise problems of having more old, retired people than young workers to keep the economy working.

Though I would like to avoid this cliché example, one cannot really skip over the exponential assumptions built into debt and economic growth. It really is one of the core problems facing the world today. As I described previously anything which increases as a percentage of itself on a regular basis is exponential. Therefore anything with a percentage rate is exponential. What is the most commonly discussed exponential rate? Interest rates.

Interest rates are used in two capacities which are strongly related. The first is in relation to debt. When you take out a loan you agree to pay back the amount you borrowed plus a percentage of what you still owe every year for the privilege of borrowing the money. Thus you owe more back than you borrowed. There are several reasons this transaction makes sense: you want the money now and the investor lending to you wants to make money on their money. The second most common use of interest rates is investing money. That is, you invest a certain amount of money and expect a certain interest rate in return for doing so. You expect to get more money back than you put in. The modern world has many abstracting layers which obfuscate the situation, but the essential reason you can invest and expect interest back is because the economy is growing, and the economy grows because there are ever more people buying ever more stuff, so it's possible for a business to borrow money today to buy equipment with the expectation of selling more in the near future in order to pay back the exponentially increasing cost of their loan.
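To make the exponential nature concrete, a couple of quick calculations with arbitrary rates:

    # Anything which grows by a fixed percentage of itself each year is
    # exponential. The rates below are arbitrary examples.
    import math

    def grow(amount, rate, years):
        return amount * (1 + rate) ** years

    def doubling_time(rate):
        # Exact form of the "rule of 72" approximation.
        return math.log(2) / math.log(1 + rate)

    print(grow(1000, 0.05, 20))   # a 5% loan compounding for 20 years: ~2653
    print(grow(1.0, 0.01, 70))    # population growing 1% a year for 70 years: ~2x
    print(doubling_time(0.01))    # ~70 years to double at 1% a year
    print(doubling_time(0.05))    # ~14 years to double at 5% a year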

Everything works fine as long as all the important bits continue increasing exponentially. Whenever this exponential increase falters for a bit we call it a recession, but even during recessions most of the factors continue increasing exponentially (babies don't stop being born the day a recession begins) and so the entire economy continues growing exponentially and the loans can be paid back.

The question one should ask themselves is what happens when most of the factors which are assumed to grow exponentially stop growing exponentially forever because they've reached the finite limits of reality? If a small blip causes a recession, what would a permanent end to growth of the core components of the economy cause?

Finally we have peak net energy. Energy is a wonderful thing. With sufficient usable energy you can do just about anything. Want to reduce the amount of carbon in the atmosphere? Apply gobs of energy and we could start today. Want to travel to the moon? Gobs of energy. Faster cars, better houses, shorter commutes, curing diseases, solving poverty: it's all relatively easy if you have unlimited cheap energy. Unfortunately we don't live in a world with unlimited cheap energy. Instead we live in the real world. A world where it takes energy to get energy.

The best example of this is firewood. The energy inside firewood comes from the sun and exists in the wood itself, even when the tree is standing. The energy is there. However, you have to cut down the tree to make logs. Then you need to buck the logs to make rounds. Then you have to chop the rounds to make firewood. Then you have to move all that wood from the middle of the forest to your home. Now you can burn the firewood at this point, but it isn't ideal because it has too much water in it and won't burn well. Without drying the wood you won't get the maximum usable energy out of it because you'll spend a significant portion of the energy in the wood turning that excess water into steam. So after you've gotten it home you have to leave the wood outside to dry for a year.

Every step in this process requires energy. It takes energy to run the chain saw. It takes energy to power the buck saw. It takes energy to swing the axe. It takes energy to move the wood. It even takes energy to dry the wood, though that tends to be 'free' from the sun. So you have to spend energy to get energy. The net energy you get out of the entire process is the amount of energy from burning the wood, heat, minus all the energy you put into the process to go from trees to burnable firewood.

Well, that's the simplified net energy calculation suitable at the small scale. At the industrial scale you also have to account for the energy which went into manufacturing the tools you used. Tools and machinery wear out, so just as you account for depreciation of assets in accounting, you have to amortize the energy embedded in the tools across all the energy you get from burning the wood. So you have to account for how much energy it takes to build a chainsaw, how much energy it takes to form and sharpen the buck saw, how much energy it takes to forge the axe head and harvest the handle. How much energy it takes to produce the truck which carried the wood and how much fuel went into building the roads the truck drove on. You even have to account for how much energy went into building your wood stove at home.

All that can add quite a bit to the required input energy. Also the entire process is recursive. The trucks which transported the wood drove on roads built with bulldozers, so the fair share of the embedded energy of that bulldozer must also be accounted against the output energy from burning the wood. It gets complex quickly to get an accurate measurement of the net energy, but you can usually get a rough estimate fairly easily.
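In rough accounting terms the calculation might be sketched like this. Every number below is an invented placeholder, and the embedded energy of each tool is amortized over the share of its lifetime used on this batch of wood, as described above.

    # Sketch of the net energy / EROEI calculation described above.
    # Every number is an invented placeholder.

    energy_out = 10_000.0          # MJ of heat from burning the season's firewood

    direct_inputs = {              # energy spent directly on this batch (MJ)
        "chainsaw fuel": 150.0,
        "truck fuel": 400.0,
        "human labour": 50.0,
    }

    embedded = {                   # (embedded energy in MJ, fraction of tool life used on this batch)
        "chainsaw": (2_000.0, 0.05),
        "truck": (200_000.0, 0.002),
        "wood stove": (15_000.0, 0.03),
    }

    energy_in = sum(direct_inputs.values()) + sum(e * share for e, share in embedded.values())

    net_energy = energy_out - energy_in
    eroei = energy_out / energy_in

    print(f"energy invested: {energy_in:.0f} MJ")
    print(f"net energy:      {net_energy:.0f} MJ")
    print(f"EROEI:           {eroei:.1f}")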

The most important thing to remember about energy is that it is net energy which enables us to do things. The energy spent getting the energy is inaccessible to society. This concept is known as Energy Return on Energy Invested (EROEI).

Now humans are lazy. Given the choice we'll go for the easy to get stuff before we do the hard things. Over the last few hundred years we've been mining coal and natural gas and oil, starting from the stuff you could get with a shovel and a bucket and working our way to the more difficult sources. The more difficult something is, the more energy and machinery are needed. The more energy needed, the lower the net energy of the resulting barrel of oil. That is, a barrel of oil contains a fixed amount of energy, so if it takes more energy to get that barrel of oil then the EROEI is reduced and that particular barrel has a lower net energy. Since energy costs money, the lower the net energy of any particular source the more expensive it is, even though you are getting the same absolute amount of energy out of each barrel.

Civilization, and all the wonderful complexity within it which lets most of the developed world work protected from the elements all year round, runs on the absolute volume of net energy we get from all our sources. It takes a particular amount of energy to heat a house, and while you can improve that somewhat with efficiency improvements, you can't be more than 100% efficient in nearly all cases. Even if everything in the world were only 50% efficient today, that only leaves a factor of two improvement to be had from increasing efficiency.

Centuries of mining for energy have used up much of the cheap energy. Barring new energy releasing technologies, none of which look like they'll be ready to scale up to industrial levels within the next fifty years, or drastic reductions in the demand for energy, which would signal economic ruin for significant portions of the world, the average cost of a unit of energy will only get more expensive. This means we need to expend ever more energy in order to extract the same amount of net energy.

To date the cost of extracting energy has increased more slowly than the rate at which we've been able to increase total energy extraction. This has meant that absolute net energy has consistently been increasing for centuries.

You have probably heard the term Peak Oil by this point, as it has gone mainstream over the last couple of years. Peak Oil is often misunderstood as running out of oil. Nothing could be further from the truth. The essence of Peak Oil is that at some point we'll start extracting oil more slowly each year instead of faster each year. That is, at some point extracting oil will get harder faster than we are able to put more effort into it. Mostly this happens because we are able to put only so much money (or energy, since money is equivalent to energy) into extracting oil in one year, and as we mine the cheap oil it takes more money to get the next barrel than the last. Therefore sometime around when we have mined about half the oil in the world we'll start mining oil more slowly each year than the year before, because mining the oil became more expensive. That's Peak Oil.

As oil is just one part of the energy landscape, so too is Peak Oil just one part of a larger Peak Net Energy story. Peak Oil is certainly an important part because oil has been so cheap and convenient compared to other forms of energy. The larger Peak Net Energy story is that as we mine the higher EROEI sources of energy we are left with lower EROEI energy sources. At some point the decreasing EROEI begins to outpace the increasing rate of total energy extraction because all the energy sources are getting more expensive. When this happens the total net energy available to society begins to decrease; that is, it peaks and begins falling to a lower level. Since all the advances of civilization run off the absolute amount of net energy available to that civilization, as we hit Peak Net Energy we start having to deal with the same problems with less energy.

This is the groundwork for why I think the future is grim. These issues are not insurmountable by any means and there are things which can be done to deal with them. I'll cover what those things are and why they are unlikely to happen smoothly in subsequent posts.

Like an Animal

This week I've travelled far to the sunny land of meetings. My favoured dinner on the first night of these trips, after I've travelled better than half the day and then put in a good half day at the office on top, is pizza. It's quick, it's easy, it's heavy to help me sleep and I don't have to leave my hotel room at all.

Pizza is also good because it is an unavoidable fact that I'll have some pizza left over. This is just the thing to keep around should I be really lazy one night or should the hotel breakfast suck. And this means stuffing a pizza box into those tiny refrigerators.

A small minority of pizza places provide their delivery pizza in an awesome box which breaks up and folds down to become half the size. Just perfect for storing my half pizza in the world's second tiniest fridge. But alas, few restaurants do this. Instead I usually just cut the box up to fit with my trusty pocket knife.

But I flew to the land of meetings remember? Sometimes I'll risk this sharp extension of myself (much in the way my watch and shoes are) to the trials of checked luggage. I haven't lost it yet, but I fully expect there will be a day when I arrive without my luggage and the luggage is never to be seen again. But this time I did not.

You know those flimsy nail files which come with toiletry bags? The ones they include because some Victorian author claimed that it wasn't a true toiletry bag unless it contained the quartet of nail file, nail clipper, tweezers and precision safety scissors. The ones which are so thin with the merest impression of a grating surface as to barely be a file at all. That is what I resorted to cutting my pizza box with.

Some might say I have fallen to the level of a savage, but a savage would never be far from their blade. Savages know the value of a good knife. No, I have fallen even lower, here in the land of meetings, I have fallen to the level of the animals; I have resorted to mostly tearing things where a simple cut would be better and easier.

Never again shall I sink so low.

Summer Slacking Comes to an end

It sure has been quite a while since I last posted. I attribute this mostly to my normal cycle of the seasons. In the winter and spring I'm inside most of the time and at my computer. Mostly I work on various personal projects and tools. However, being inside so much and generally not terrifically busy leaves me plenty of time to consider things. When I consider and ponder I usually end up with something worth writing and do so.

During the summer and fall, on the other hand, I am busy outside and travelling much of the time. This leaves me more or less exhausted, with neither the time nor the energy to do much directionless thinking. With no free thinking I end up with far fewer interesting things to say.

But late October is when I switch around in most years. I'm slightly ahead of schedule this year as I started getting restless and coding on personal projects again this past week. I did this even though I am not yet done my normal seasonal travel. This may bode well for my productivity over the next few months.

I'm hoping to make significant progress with Tachyon. When I left off working on it, it wasn't in quite a usable state yet. Linux support was broken, and while it was a mostly functional terminal multiplexer it didn't support detaching or any of the bandwidth and latency improvement features which are the entire reason I want to write it.

The summer slacking has come to an end, time to get cracking during the winter of energy.

Text Editor Platforms

Lately there have been a few discussions about text editors sloshing around the Internet. Among this discussion was the concept of a cross platform editor. During the discussion this was meant to mean Linux vs Windows vs MacOSX, but I actually think that when discussing text editors the platforms to note are not the traditional ones. I believe that the platforms of note are:

  • X11

  • Windows

  • OSX

  • Web

  • tty

  • iOS

  • Android

I don't divide the editors by the standard OS, but instead by the windowing system. Looking at my use I think this is a more useful definition of platform in this case. In the past I've used an editor on all of these platforms. Especially important is the distinction between the tty and the OS it's running on. I spend most of my editing hours editing on a remote machine, but I still want the same editor when I'm editing locally with a fully fledged graphical windowing system.

To me, the most important consideration of an editor isn't whether it runs on Linux and FreeBSD and OSX, but if it is accessible on the window system I have to use at that moment.

The Wiki Knowledge Future

I think the future of the sum of human knowledge looks more like TVTropes than Wikipedia, and a lot less like the array of publishers and unavailable papers of today. However, I see the underlying technology being much different.

Wikipedia is certainly a successful experiment in crowd sourced knowledge. It has millions of articles across dozens of languages. The articles are of varying quality, but the ones of general interest are often of good, if not great, quality. However, along with several other frequently discussed problems Wikipedia has, I don't see it as the direct ancestor of the future of knowledge, for three reasons: one technical, one fundamental and one social.

The technical limitation is perhaps the easiest hurdle to overcome. Wikipedia, as it stands, is based upon the complex MediaWiki software. This software is difficult to install, difficult to administer, and sharing wiki pages or portions of pages between MediaWiki instances is difficult and time consuming. The sum of this makes it extremely difficult to run a fully fledged instance of Wikipedia which keeps up to date and serves a medium-sized population, say that of a single university. As has been shown time and time again throughout history, a single point of failure, in this case the WikiMedia foundation, will always, given enough time, fail. Then there are the various scaling costs. Surely you have seen the pictures of a downtrodden Jimmy Wales begging for donations to keep the expensive Wikipedia servers running. Wikipedia is, no doubt, expensive to run and as it becomes more popular it'll only get worse. Decentralization would make the total running costs greater, but spread them out over a much larger number of groups, many of which could run their own copy using otherwise idle resources. All this could be fixed, but as it stands Wikipedia is too centralized to scale to the level required to be a store of all human knowledge.

The fundamental issue with Wikipedia is also a relatively simple one to solve, but likely cannot be solved without starting over. From the outset Wikipedia chose to have separate language versions. There are some good reasons to do this, but also several good reasons this separation should be avoided. The most obvious reason to avoid such a separation is duplication of effort. If there is an English article on some topic and an article on the same topic in Japanese then at least two people spent time writing more or less the same content, doing the same research. Not only is this wasteful, but it is extremely unlikely that either of the two articles is a superset, knowledge-wise, of the other. Instead the English version will contain some information the Japanese version doesn't and vice-versa. Better would be combining the articles, using manual and automated translation to present the text in the reader's language of choice. In this way there is only one article containing all the information. Manual translation help would often still be useful, but the canonical source would be available for those needing precision.

Finally, Wikipedia is not the future of knowledge storage for the simple social reason that its community has chosen to be merely an encyclopedia. At the outset, before it was known that the experiment of a crowd sourced encyclopedia would work, it was a reasonably narrow goal to achieve. Since that time the WikiMedia organization has branched out and created wikis for many other uses, such as a dictionary, a quote database, a species listing and even a text library. These are all in line with a store of knowledge, but being separate they fail to be cohesive. Lack of cohesiveness isn't the worst problem however. Intersite hyperlinks have limitations, but they are worlds better than the book references they replaced. The biggest issue is that Wikipedia and its cousins intentionally exclude original thought and original research. This limits all the discourse to appeals to authority. If you can't find or manufacture a sufficiently strong authority to appeal to then that knowledge is ignored and the article itself likely deleted. Within the scope of an encyclopedia this is likely acceptable, though it severely limits the scope and depth of what it can contain. There are many topics for which there is no formal research and the canonical knowledge of the respective community is insufficiently authoritative. Obviously this is unacceptable for a store of human knowledge, as opposed to a large index thereof.

Contrast this with TVTropes. While TVTropes has the same technical limitations as Wikipedia with respect to centralization, it has solutions for the other two problems. The fundamental language division of Wikipedia is solved in a traditional, if less elegant, way by TVTropes: instead of supporting multiple languages the entire site is in English. That's not great if you don't know English, but at least you never have to read multiple articles on the same topic in different languages to get all you can out of it. More interesting is that TVTropes is not an encyclopedia at all. Instead it's mostly original research. There's a lot of sausage making which goes into this research, but the end result is undeniably amazing. This is positive proof that, at least in some areas, a wiki based community can accomplish novel research and categorization.

The greatest weakness I see in the TVTropes model is the lack of direct storage of the original source material. I'm sure they would store the original texts and shows and games if they could, linking to specific examples wherever appropriate. Aside from this exclusion I would posit that TVTropes is almost entirely self contained. Any definition or example required to make distinctions clear is available, and often linked, within the site itself. This is quite different from the Wikipedia model, where you need to leave the site to view a dictionary and the examples included on the site are often quite sparse.

Inclusiveness and mechanisms for handling original research are things TVTropes does much better than Wikipedia. These are things which are critical for any storehouse of human knowledge. It does no good to limit such a storehouse to references to old books which almost nobody can actually access for deep answers to subtle questions.

I see such a system as a necessary next step in the collection and storage of all of human knowledge. The current state of affairs is that most knowledge is poorly dispersed. Much of the scientific knowledge is locked away in university libraries, slowly rotting away. There are ample historical examples of knowledge thus stored being lost due to calamities of various sorts, from short term fires to invaders to changing political whims. A significant portion of the knowledge which isn't stored that way is locked up inside the heads of practitioners and taught only through hard experience and oral tradition. The continuity of oral tradition is very easily disrupted and there has been, until the wiki, no truly suitable system to give knowledgeable Old Timers the ability to record their informal knowledge for posterity.

There is evidence that the current global society is on the verge of another dark age. Previous dark ages were never global in scope, and so knowledge was preserved by other cultures. For example, the Persians kept Greek philosophical thought alive during the last dark age in Europe such that it could be reintroduced when conditions improved. Even so, much knowledge was lost and had to be painstakingly pieced back together by successive civilizations. Roman concrete is one such example. The next dark age is likely to be global in scope, so a more thoughtful approach to saving the knowledge is necessary. A digital wiki looks to be a promising way to accomplish this preservation.

Freescript: A Proposal

One major piece of functionality of the open web which Freenet lacks is site specific dynamism. On the open web this is facilitated with Javascript and callbacks to the original web server. For good security and anonymity reasons Freenet strips Javascript out of all freesites before they are presented to the web browser. Some sites recommend local Greasemonkey scripts or similar to add dynamism to their freesites. Of course many people won't install these scripts and cross-browser support is likely poor.

Currently if you want to produce a dynamic freesite you really have to write it as a plugin, even if, as with Sone, most of the action happens in the web browser using Javascript. I propose that Freenet would be well served by having a mechanism to allow arbitrary freesites to include some measure of dynamism. With a suitable protection model there is no reason generalized freesite apps could not be created with approximately the same functional scope, though lesser persistence, as Freenet plugins.

As a starting point for discussion I propose a bytecode VM which is written in Javascript and provided by the node, not the freesite, to the browser for running the Freescript. A bytecode VM should be easier to implement, easier to audit and easier to apply different protection levels to different freesites. I would suggest a VM suitable for running a Python-like language efficiently.

The reasoning behind disallowing Javascript in freesites still stands. Specifically, without restriction it is possible for Javascript to break anonymity by accessing resources outside Freenet or by storing new keys within Freenet with identifying information. As such, different protection models would be necessary. As a starting point I would suggest ephemeral (the default), isolated, key-wide and insert privileges. I'll briefly describe these below, but the general idea is to allow the user to specify the maximum permissions they are willing to provide to any particular freesite, perhaps with a signature ensuring that the code they trusted hasn't changed. Safer settings would be the default.

The ephemeral protection level sandboxes the Freescript into a timeless space where data external to the page can neither be read nor written. When the browser tab is closed all state is deleted. As long as the script cannot break out of the VM this protection mechanism should be completely safe.

The isolated protection level would allow a couple of things beyond the ephemeral level. Most notably it would allow the Freescript to access other data within the same container. It would also allow some amount of storage on the node. This might work similarly to how local storage works in web browsers today, except that the data can't be stored in the browser itself because that would provide poor protection.

The key-wide privilege level is the same as the isolated level except the script is allowed to access any data under the same key as the freesite the script is on.

The insert privilege level is the least protected. At this level the Freescript can access any key and even insert new keys using the node. Such a setting would allow very capable applications, but needs to be used carefully since such a script could completely blow any anonymity one might have.

There might be uses for other modes I've not described and it might be that some modes I have described are not necessary. In any case it would be a good thing if an advanced setting could configure a custom restriction. For example, perhaps a custom mode isn't allowed to know the real time but is allowed to insert keys with a long random delay. Such flexibility would seem to be useful for those advanced users with moderate anonymity needs. Of course those with strong anonymity needs wouldn't give any Freescript insert permissions at all.
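To make the proposal slightly more concrete, here is one way a node might represent these protection levels internally. This is purely an illustrative sketch, in Python rather than anything Freenet actually uses, and every name in it is hypothetical.

    # Hypothetical sketch of the proposed Freescript protection levels.
    # None of this is real Freenet API; it only illustrates the idea of a
    # per-freesite cap on what a script may do, chosen by the user.
    from dataclasses import dataclass
    from enum import IntEnum
    from typing import Optional

    class Protection(IntEnum):
        EPHEMERAL = 1   # no external reads or writes, state dies with the tab
        ISOLATED = 2    # may read its own container and use some node-side storage
        KEY_WIDE = 3    # may additionally read anything under the same key
        INSERT = 4      # may read any key and insert new keys

    @dataclass
    class FreescriptPolicy:
        level: Protection = Protection.EPHEMERAL    # safest default
        trusted_code_hash: Optional[str] = None     # pins the script the user actually trusted

        def may_read_same_container(self) -> bool:
            return self.level >= Protection.ISOLATED

        def may_read_same_key(self) -> bool:
            return self.level >= Protection.KEY_WIDE

        def may_insert(self) -> bool:
            return self.level >= Protection.INSERT

    # Example: grant one freesite key-wide access, pinned to a known script hash.
    policy = FreescriptPolicy(Protection.KEY_WIDE, trusted_code_hash="...")
    print(policy.may_read_same_key(), policy.may_insert())   # True False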

Bitrot Free Backups

Bitrot is a great issue affecting archives which most people learn about only after they have irretrievably lost pictures or papers to its effects. This is unfortunate because there is a simple, efficient way to avoid losing data to bitrot, but few seem to use it.

Bitrot plagues all media. Printed pictures fade, stone tablets erode and hard drives demagnetize. Bitrot is simply the partial degradation of bits of the media until those sections become unrecoverable. With low density paper or stone media this was much less serious: you might lose one letter in a word, but it doesn't make the entire text unreadable. Digital media is more dependent upon most bits being correct. Digital formats are usually compressed such that changing a single bit changes the meaning of all the bits which follow it. Thus one single error can completely corrupt an image. Most of the image is still there, but without expert knowledge of the format the image is unrecoverable.

Bitrot has two primary sources. The most common is accumulated errors in the media the data is stored upon. Burnable CD-Rs, for instance, degrade over time from exposure to light, heat, moisture and bacteria. All digital media has built in error detection, but it is of a simple sort because it must be fast and generally applicable. Under normal use it is sufficient, yet it can become overwhelmed by accumulated errors in a long term storage situation. All digital media, such as hard drives or DVD-Rs or tapes, slowly degrades even when not in use. Given sufficient time the built-in error correction facilities are unable to correct the accumulated errors. At that point an erroneous bit appears. It might be that the error correction can still detect the error even though it can no longer correct it, but that doesn't need to be the case. Undetected errors are not uncommon.

As in the analog world, some media last longer than others. Stone outlasts paper, for example. With this in mind there are digital media which outlast CD-Rs, but they tend to be expensive. On the costly end you can pay to have your data pressed into DVDs or masked into custom ROM. Such techniques will last hundreds of years with basic physical protection. They'll also cost more than your car for any significant amount of data. However, even with this stable media you will see bitrot, just at a lower rate. You can't escape bitrot, only delay it.

The second source of bitrot is copying errors. In order to move a file from one computer to another the bits might be copied dozens of times. Precisely like copying DNA, every copy presents a risk of errors. There are error correction mechanisms built in, but they are kept simple to be fast and can't catch everything. Avoiding this type of bitrot is as simple as verifying the new copy immediately after it is made. Simple, but it can double the time the copy takes.
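Verifying a fresh copy is straightforward: hash the source, hash the copy, compare. A minimal sketch using nothing but the Python standard library:

    # Copy a file and immediately verify the copy against the original
    # using a strong hash. Doubles the read work, as noted above.
    import hashlib
    import shutil

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_and_verify(src, dst):
        shutil.copyfile(src, dst)
        if sha256_of(src) != sha256_of(dst):
            raise IOError(f"copy of {src} to {dst} did not verify")

    # copy_and_verify("photos/img_0001.jpg", "/mnt/backup/img_0001.jpg")  # hypothetical paths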

Since you can't escape bitrot you must repair the damage after it occurs. There are three common solutions to this problem. The first is favoured by archival libraries. First you store multiple redundant copies of every file you wish to archive on different media. These days this tends to be a large array of hard drives and an automated tape library. Every file has a copy stored on many hard drives and several tapes. On a regular and frequent basis you check all the versions of every file against the others, looking for changes. This can be done somewhat efficiently using a good checksum algorithm. This method is effective, but expensive. Not only are many media required, but you have to have the manpower and automation to continuously check for and fix bitrot. This is easy with only a handful of gigabytes of data, but quickly becomes difficult and ludicrously expensive as the amount of data grows to terabytes in size. The key with this method is to store many copies and check them all frequently enough that bitrot can't cause too much trouble. Both of these things are expensive.

The most common Internet recommendation for protecting against bitrot is a watered down version of the archival library solution. Instead of constantly verifying the archived data's integrity across many copies, many people recommend a two pronged approach. First, all data is stored on multiple media, but where a library would have dozens of copies most people can only afford a handful. Second, instead of constantly checking and repairing errors, most recommend rotating the media out as it ages. For example, having three hard drives containing three copies bought over three years. Every year a new drive is bought and a new copy created. When a new copy is created, several of the older archives are checked to get a good copy of each file. The simplest schemes use voting to determine which copy is good; better schemes check a strong hash of the file contents recorded when the file was new and known good. This is relatively robust, but the few copies and infrequent verification carry a significant risk of all copies bitrotting some small amount in different ways. In such a way the data can still become unrecoverable. This is especially true when the data has exceeded the capacity of a single media, whether that is a hard drive, burnt disc or flash drive.
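The "strong hash recorded when the file was new" idea can be sketched as a manifest: record each file's hash once, then at every rotation check every copy against the manifest to see which copies still hold a good version of each file. The manifest format and paths below are invented for illustration.

    # Check each backup copy against a manifest of hashes recorded when the
    # files were new and known good. Manifest format and paths are invented.
    import hashlib
    import json
    import os

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def good_copies(manifest_path, copy_roots):
        """Return {relative_file: [roots still holding an uncorrupted copy]}."""
        with open(manifest_path) as f:
            manifest = json.load(f)   # e.g. {"pictures/img_0001.jpg": "<sha256>", ...}
        result = {}
        for rel_path, expected in manifest.items():
            result[rel_path] = [
                root for root in copy_roots
                if os.path.exists(os.path.join(root, rel_path))
                and sha256_of(os.path.join(root, rel_path)) == expected
            ]
        return result

    # status = good_copies("manifest.json", ["/mnt/drive2019", "/mnt/drive2020", "/mnt/drive2021"])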

The second prong of the common Internet recommendation is to come to terms with the fact that the above method won't protect everything and that you should fall back to well understood paper archival methods. Basically, sort out the most important files and print them. At this point bitrot is much less of an issue and protecting paper for decades and centuries is well understood, but you can't save everything. Videos are impossible and many raw data files have no convenient printed format. There is also the issue of space. Hundreds of gigabytes of digital pictures can be held in one hand. Those same pictures printed would fill a large house to bursting.

The previous two methods do their best to detect bitrot and find a good copy to fix it. In essence they gamble that bitrot won't happen to all copies between verifications. The third method assumes that bitrot is unavoidable, but that it happens relatively slowly and independently on every medium. Instead of trying to avoid bitrot it prepares to repair it after the fact. The specific implementation of this solution I use has two components. First, each copy of the data has a PAR2 recovery set created for it. At its heart this uses a similar algorithm to the error correction mechanisms on most digital media; the difference is that the PAR2 recovery set covers all of the files instead of a single 4KB chunk at a time. If bitrot happens randomly across all the data, the recovery data is configured so that ten percent corruption can be repaired, and a single unrecoverable block can make a file useless, then it is far more likely that eleven percent of one block becomes corrupt, and thus unrecoverable, than that eleven percent of the entire archive does. Using PAR2 over all the data together provides better protection than the media error correction at the expense of more computing power.
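
In practice I don't do any of this by hand; the par2 command line tool does the work. Something along these lines, wrapped in Python only to keep all the examples in one language. The exact options differ between par2 versions, so treat the flags as approximate and check your own documentation:

    import subprocess

    archive = "backup.tar"  # placeholder name for the file being protected

    # Create a recovery set with roughly ten percent redundancy.
    subprocess.run(["par2", "create", "-r10", "backup.par2", archive], check=True)

    # Later, check the archive and repair it if any blocks have rotted.
    subprocess.run(["par2", "verify", "backup.par2", archive], check=True)
    subprocess.run(["par2", "repair", "backup.par2", archive], check=True)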

The second component of my strategy is having a small number of copies of the archive along with its PAR2 recovery data. This is needed for two reasons. The most important is to protect against total media failure. All the recovery data in the world won't help if the media is completely destroyed. Flash drives get lost, hard drives fail to spin up, tapes get turned into party streamers. Multiple copies are the only real defence against these events. I currently store five copies of my backups and it is unlikely that all five would be destroyed within the same short time frame.

This method makes the reasonable assumption that bitrot happens independently on different media; that is, which bits rot on one medium (such as a particular hard drive) has no influence on which bits will rot on another (another hard drive or a cloud storage service, for example). Trusting in this assumption and a relatively slow bitrot rate, even if every copy bitrots more than the ten percent which can be corrected using the PAR2 data, it is unlikely that the same files have bitrotted to the same extent across all the copies. Instead it is likely that, using multiple copies, it will be possible to aggregate a set of files and recovery data in which less than ten percent of the data is corrupt, and thus recover all of the archived data.

I prefer this method because it is affordable, low maintenance and trustworthy. Though I currently store five copies of my backup this system is workable with fewer. I can't really recommend fewer than three full copies, but I have had success with two. If you format your archive correctly then these two copies don't even have to be identical. If you do regular backups of your archival data, say every six months, and your backup is in some append-only format, then it's possible to combine the latest backup with the previous backup to get around corrupted data. You can use raw directories of files as an append-only archive format, but PAR2 has a limit of roughly thirty thousand files per recovery set. I personally use a log archive format where new versions of old files are appended later in the file. In this way the first N bytes of the latest archive file match the first N bytes of the previous archive version, and so on back through all the versions I store. You perform the aggregation by replacing overly corrupted files, or sections of files, with the same chunks from the other backup. Missing or out of date files will be treated as corrupt data, so you don't get exactly the same recovery power as from an identical copy, but with effort this can be enough to get you to the point where the recovery data is sufficient. First try replacing the missing data in a copy (a copy!) of the latest backup with data from the older backup; if that fails you should also try filling in the missing data in the older backup as well. I've had success using this method to recover a backup off CD-Rs and DVD-Rs where some discs were unreadable. Thus this system is relatively reliable even if you simply burn a single copy onto write-once media, such as DVD-Rs, on a regular basis and keep the last handful of copies. Of course the more copies the lower the chance of data becoming unrecoverable.
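
To make the chunk replacement concrete, here is a rough sketch. It assumes both backups share the append-only layout described above, works in arbitrary fixed-size blocks, and takes the list of unreadable block numbers from whatever tool read the damaged media; all of those details are simplifications of my own:

    BLOCK = 4096  # an arbitrary chunk size for the sketch

    def patch_from_older(latest, older, bad_blocks, out):
        """Write a copy of the newest backup in which every block flagged as
        unreadable is filled from the same offset in the previous backup.
        Blocks past the end of the older backup are left as-is for PAR2."""
        with open(latest, "rb") as new, open(older, "rb") as old, open(out, "wb") as dst:
            index = 0
            while chunk := new.read(BLOCK):
                if index in bad_blocks:
                    old.seek(index * BLOCK)
                    replacement = old.read(BLOCK)
                    if len(replacement) == len(chunk):
                        chunk = replacement
                dst.write(chunk)
                index += 1

After patching you point the PAR2 repair at the result and let the recovery data clean up whatever couldn't be filled in.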

The level of maintenance overhead is quite low. There is no continuous verification of the data. There isn't even any verification of old copies when creating a new archive. Instead you just keep as many copies as is reasonable and might have to do some work to combine multiple copies when recovery is necessary. The ability to combine multiple copies of possibly different ages to recover all the data in most cases makes the system trustworthy. There are better systems for other use cases, but this is the best I've seen for the private use case of ample amounts of personal data reliably stored for the long term with a minimal amount of work or hardware cost.

Private Project Licensing

Licensing intellectual property used to be something the average person just didn't do. They had no reason. Copying was too expensive, too low fidelity and generally not economically feasible. You couldn't get a book you wrote out to a large number of people, so there was no need to worry about publishing rights. The only organizations which could publish already knew what to do to protect themselves and played more or less above board. It's not like they would even know about your hobby paintings unless you notified them.

With the spread of computers like wildfire and the proliferation of software development skills and tools that changed. Now anybody can sit at home, produce useful software and release that software to the world at large. Similarly someone can write a book and put it online to be read. Licensing now matters for the common person.

This is not an article intended to help you pick a license. There are already a good number of sources which do a much more complete job than I would be willing to do. Instead this article is about the general groups of people I understand to exist when it comes to licensing. I'll try to explain some reasoning behind each group. This post won't help you pick a specific license, but it might give you something to ponder before you decide which group you fall into.

The simplest group to describe are those who don't put any license on their creations at all. Not putting any license onto software or writing is equivalent to saying the code is for private use only. You are free to do what you will in the privacy of your own home, but don't share your changes and don't even consider doing anything where you might make money. The author doesn't tend to care that much right now, but hasn't made any guarantees about not caring in the future. Often you'll see this on writing or small code projects which the author never believes will be all that valuable to anybody except for short term entertainment value.

A second group might be succinctly described as "Don't Sue Me". This group puts the minimum restrictions on their software, but does construct the license such that users of the code can't sue the original author with any expectation of winning. This is really a middle ground between the first, license-less group and the next group. Often authors will put this type of license on code which they think might be useful to somebody, but never as a significant part of a commercial product. They might otherwise have chosen not to put any license on at all, but they've read other licenses, noticed that they tend to explicitly mention the lack of a warranty, and consider that a prudent protection to have.

The third group comprises a large portion of the open source world. Though it's hard to objectively compare the impact of code under various licenses, I would not be surprised to hear that well over half of all the open source code in the world carries licenses which fall into this group. I would call this the "Free-est license" group. The details vary between licenses of this class but they tend to boil down to: don't sue me and don't claim you wrote this. And that's it. Just about any conceivable use of code and other creations licensed in this way is considered acceptable by the author.

There are varied reasons to choose licenses which fall into this group. Corporate-funded development tends to prefer them because it means the code can be used however the corporation wishes, without restriction. If they want to add their own magic sauce and release it as their own product they can. However, it also permits them to submit their changes back upstream to put the burden of maintenance upon the community. This often results in a better public code base and lower maintenance costs for the corporation.

Individuals might choose licenses in this group because all they care about is seeing their code used. They don't mind that a company can slurp their code in, put it into a multi-billion dollar a year program and never see any return. Often if the author started in the "Don't Sue Me" group they'll arrive in this group, at least for large projects, as they become aware of the limitations in usage the less comprehensive licenses cause.

Another very good reason to choose licenses of this sort is for reference implementations. Having a clear and unrestricted Free-est license on a reference implementation improves compatibility and speed of uptake. If any program which would find the file format or algorithm useful can take your tested code and use it as desired, then many will, ensuring compatibility. If the reference implementation starts out as a widespread de facto standard then future incompatibility is minimized, since any competing implementations must interoperate with the reference.

The Free-est licenses have some very strong points to defend them as licensing choices. However, for some they are too free since in many cases the code will enter a proprietary black hole, never to be seen again no matter the improvements.

The other huge portion of licensed software and other creative works falls under what I'll call the Share Fairly licenses. The Share Fairly licenses are more restrictive than the Free-est licenses; however, these restrictions are used to force anybody who modifies (or in some cases uses) the software to provide their changes to the public so that the public can use or build upon them in turn. Because of this, such code must be carefully segregated from proprietary code. Interface with the code incorrectly and you run the risk of your proprietary code falling under the Share Fairly license as well.

This group of licenses was originally created to provide freedoms which the Free-est licenses do not guarantee for the users of software. They do this by restricting the packagers and distributors of software. For some this is still the most important reason they choose these licenses for their works.

However, not all authors choose these licenses for such high minded concepts as freedom. Other authors might choose these licenses to restrict the one way flow of work that can occur with the Free-est licenses. Large corporations will tend to prefer this group of licenses when it comes to collaborating with other large corporations which are competitors in one manner or another. With a project licensed as such, each corporation is required to publish their changes for all to see. In such a way they ensure that their investments into the code cannot be unilaterally taken by a competitor who doesn't submit their changes back. Such a one way flow effectively gives the unsharing participant free work, and fear of situations like these would otherwise prevent the collaboration from occurring at all. In a way similar to corporations, an individual author might choose the Share Fairly licenses to force others to return value to the community if they've received value from the software.

Other authors choose Share Fairly licenses not as the final say in the licensing of their software, but instead as a default position in a licensing negotiation. Such authors are displeased at the thought of a company taking their work and making a profit from it without compensating the author in some manner, but the author still wants others to benefit from his work. In this case a restrictive Share Fairly license might be chosen as a default minimum compensation: any company which finds the code profitable must provide their changes back to the public. However, should a company not wish to compensate the author by making the code better for everybody, then these sorts of authors are more than willing to enter into a formal negotiation for different compensation. Such compensation could be a lump sum of money, royalties, employment or just about anything else you could imagine.

These are just the largest of the licensing groups. There are other smaller groups, such as the non-military use group, but they tend to almost fit into one of the groups above barring some special restrictions. These are also only some of the most common reasons for choosing one license type over another, but they are by no means exhaustive; it is likely impossible to compile such a list. One common reason to choose one license group over another which I didn't mention is simply local inertia. If you are within communities which predominantly choose one license group over another then you are more likely, when you give it little thought, to choose whatever license is most common there.

Such an article as this wouldn't be complete without a note about which camp I fall into and why. I'd likely receive questions in any case, so I may as well answer them upfront. Without other modifying factors I will choose an appropriate Share Fairly license for my personal projects. I do this because I make my living as a software developer and do not desire to see either of the extreme possible results of the other license group choices. I also view the public license as a default negotiation position and am always open to discussing proprietary licenses for compensation. I consider my work useful enough to be worth providing to the public, but it takes real work to produce this software from which I'd like to see some return. Whether this return is in users, patches or cash I care little, but I don't want to see some company consuming my work for their sole profit.

Common Misunderstandings About Version Control

I have an interest in the mechanics and tools used to run software projects. I've used several of the most common version control systems and dealt with projects large and small. I've thought about the theory and practicalities of version control and like to think I have a solid grasp of it as a topic. This is why it pains me to read, more frequently than I wish were true, some developer making a statement about version control which is untrue and detrimental to the use and discussion of version control. I hope to clear up the most common incorrect beliefs in this post. I'll be discussing primarily in the context of Subversion, Perforce and Git. I unfortunately don't have extensive experience with Mercurial, but Git shares several of the same core concepts and together they are currently the best of breed distributed VCSes. Subversion and Perforce are the best of breed centralized VCSes and there are some important distinctions between the two worth noting at certain points. All three together should provide sufficient coverage of the necessary concepts.

To put a face on the prototypical developer who makes these ignorant statements about version control, imagine a developer in his early twenties. He's never really worked at a large corporation, but he has done plenty of coding for small personal and consulting projects. He's only ever really used Git and everything he reads says it's the best. Let's go over the worst of the misunderstandings about version control this developer is likely to have.

No Binaries Checked In


One common belief about version control is that you shouldn't check in binary files. To some, version control is only for source files. When pushed many of these developers will agree that small binary files, such as images for a website, should also be checked into the VCS, but by no means should large binary files be checked in.

Such a view is incorrect and commonly broken in several industries. Firstly it is incorrect in the belief that large binaries, such as the Photoshop sources of those website images, should not be version controlled. There are only good arguments for doing so. Large binaries are able to change just like any source file and other parts of the project are just as capable of being dependent on the large binary file as any source file. It is true that a separate tool could be used to separately version the binary files, but why do that if your VCS is capable of doing it for you right alongside the rest of your project? Why should a video maker use separate tools to track the versions of the original videos and the final renders?

There is then the argument that binary intermediate products, that is, binary files which can be produced from other files in the repository, should not be checked in. In some cases, such as source files to object files, this makes sense and is often the context in which such a belief is learnt. There is little to be gained by committing the compiled output of a source file when a compiler can recreate it with no trouble. If creating the intermediate products is cheap then there is no need to commit them. But that processing is not always cheap. It is common in video game development to have not only the Photoshop originals of various assets, but also the flattened and compressed versions committed into the VCS. The reason is that it would be expensive to pay for Photoshop licenses for everybody who needed to build the game for whatever reason. It might also be time consuming. If it takes five minutes per asset to compile from a Photoshop format to the format needed by the game, then a game with hundreds of such assets may gain immensely from keeping the intermediate products. Similarly, other intermediate products can take hours of processing to produce.

Even more than intermediate products, it is common in embedded development to store not only the source code of the project, but also the binaries of all the tools necessary to build it inside the VCS. When working with an evolving tool chain it can be a great aid to be able to go back an arbitrary number of versions and know that you have the matching tool chain. Such binaries can run into the hundreds of megabytes and often make sense to put into the project VCS.


Linux is a Large Project


The previous discussion about binary files brings us to an extremely common misunderstanding about version control. This is especially true in the open source and web development worlds simply due to the lack of exposure. It might be difficult to believe for people who have never worked at a corporation, but in just about every way Linux is not a large project with respect to version control.

Given a little thought this should be pretty obvious. There are many projects in the world, such as Linux distributions and embedded software, which are significant projects in their own right and include the Linux kernel as a subset of their source code. Quite obviously these projects must themselves be larger than the Linux kernel. In fact, I would argue that the Linux kernel, with modern technology, is merely a medium sized project.

Consider some recent statistics about the Linux kernel. Version 3.2 has about fifteen million lines of code across about thirty seven thousand files, about thirteen hundred developers take part in each release and they submit about seven patches per hour for inclusion. A checkout is about 450 megabytes. While the number of developers would rank this as a large project, none of the other metrics do. Fifteen million lines isn't nothing, but it certainly isn't anywhere near the size of projects which include the source of the kernel, glibc, gcc and the other things you see in a Linux distribution or in MacOSX. Seven patches submitted per hour isn't an impressive number either and can easily be matched by a couple hundred developers working on a single project at any corporation. A checkout of less than half a gigabyte is nothing compared to projects where binaries are stored, which can reach hundreds of gigabytes for a AAA video game.

Big projects are just so much bigger than Linux when it comes to strain on the version control system. Multigigabyte checkouts are normal. Lines of code in the fifty million range or more, including libraries and third party components, are not abnormal. It can be surprising the first time one thinks about it, but consider the case of KDE. It's a well known open source project which is larger than Linux in many respects and it itself probably only barely crosses the large project threshold.


Centralized Means No Branching


Given the way that some people talk you might be led to believe that before Git no VCS ever supported branching. This is obviously not true; the idea of VCS branching support has been around for at least forty years, and even RCS had primitive support for it. And yet the belief that centralized VCSes such as Subversion and Perforce don't support branching persists. Often such a view is expressed as though the 'normal' way to use SVN or Perforce is to develop everything in the trunk or mainline. While this is a common method of development suitable to some projects, it is by no means the only solution. Much of this misunderstanding comes about, I believe, because the DVCSes have chosen to solve an easier form of branching than the centralized VCSes. I'll discuss the distinction in the next section. But first I'll show how branching is a critical component of every modern VCS, centralized or not.

For that purpose I'll define a modern VCS as one in which there is a checkout which is separate from the files as committed into the VCS history, until such time that the developer consciously commits changes from the checkout into the repository. Many VCSes satisfy this definition, Git, Subversion and Perforce included. Now consider the situation where a developer has a checkout of some branch and makes some changes. Before they commit, a colleague commits some other changes into the same repository; assume for the moment that, if using Git, it is actually the same clone on the same machine and the same branch. Now before the developer can commit their changes they need to first bring in the changes from the repository and then merge their changes on top of them.

In such a situation the checkout is a branch with a maximum commit depth of one. That is, the checkout can be considered a branch which only ever has a maximum of one set of changes in it. When bringing in changes from the branch a merge occurs. This is the simplest way in which branching is a capability of every modern VCS, including the centralized VCSes. Of course Subversion and Perforce have documented and battle tested branching and merging on a larger scale.

Sometimes when a developer exclaims that centralized VCSes don't have branching they really mean what is termed local branching in DVCSes. That is, within a developer's own clone they can branch as much as they want and make commits into those branches as they see fit. It is not required of centralized VCSes, but it is common for a repository to either allow developers to create branches as they see fit or to have areas within the branch namespace explicitly for developers to create private branches. The end result is nearly identical in both cases. The only distinction is that DVCS local branches can remain hidden from everybody but the creating developer while private branches in a CVCS can only hide in plain sight.


One Way to Branch


As I mentioned earlier, most current DVCSes have decided to solve an easier branching problem than most CVCSes. If you've only ever used a DVCS then you might be underinformed and believe that there is only one way to branch. The branch-the-world philosophy of DVCSes is certainly simple to grasp and makes merging and keeping track of branches easier to implement, but it is not the only way. Subversion and Perforce are more flexible in this regard and support branching directories.

At first it isn't clear why you might want to branch beneath the root of the repository. Consider an embedded project based on Linux. Such a project will have a copy of the kernel, a copy of the C library and some other bits of code. Now if a new system call were added to the kernel to be used by the application code, then the C library might be updated to support that system call as well. It would be useful for such a change to go in all at once, to ensure that you never have to deal with mismatched versions when compiling. However, since these are separate components most of the time, it is also useful to build each as an RPM. Since these components also take a long time to build, why should working on the application force you to rebuild the kernel RPM? One solution to this problem is to branch each package separately depending on which changes you require. Only a system which allows you to branch directories, and not just the repository root, gives you this ability.

Consider a further case where you have some company-wide documentation in the same repository as the project. It's often not useful to branch that documentation when you branch the project (build server configuration, for example, doesn't branch with the project either), but it's still worthwhile to keep it in the same repository for other reasons. Such setups are impossible in branch-the-world models.


DVCSes Are Special


This is perhaps the most annoying misconception, and it comes from inexperienced VCS users who have only really used DVCSes. They often believe that DVCSes are special in ways which aren't true and ignore the few ways in which they actually differ. The only way in which DVCSes are fundamentally special is the distributed aspect. Current DVCSes keep a copy of the entire history in the local clone. This is good in some situations, e.g. on an airplane, and bad in others, e.g. when the history is 500GB in size. It is dependent upon the situation whether a DVCS is better than a CVCS or not. Beyond this there is nothing special about DVCSes. Everything you can do with a DVCS you can do with a CVCS with more or less trouble. Similarly, everything you can do with a CVCS you can usually accomplish with a DVCS with more or less trouble.

Version control is all about keeping track of changes. The theory is simple, you just have to look past the implementation details of the VCS you are using and think about the high level operation you are trying to accomplish. Do so and you'll be significantly less likely to misunderstand version control.


Hacky Simulation

Xug. I completely forgot about my final Evolutionary Societies assignment that's due tomorrow. Ok, let's see how much trouble I'm in.

Simulate the effects of an energy peak on a Type 1 civilization. Note: This will take a day on the campus supercomputer so schedule your slot ahead of time.

Great. I didn't schedule a slot and the last one is surely taken. That's bad. But look, it says nothing about the simulation fidelity. I'm smart, surely I can take some shortcuts and get the simulation done in time. Better at least try right?

So let's see, Type 1 civilization. So limited space travel and probably no planet splitting. I can work with that. At least I'm not simulating multiple solar systems. Ok, solar system size, smaller is better but it must be big enough to not take too long. Medium sized star it is then. Need a planet of course, let's put them on the third one for fun. In the Carbon Zone of course, I need this done quickly and there is no time to wait for Silicon life. No point in simulating more than the single solar system so we'll limit the simulation to the volume of the solar winds. I like space travel so let's make sure there's a reason for these beings to go into space. I guess that means they need to be surrounded by rocky worlds, but you really do need interesting large worlds to go to space, so I'll put a few of those out there. Just to keep them guessing I'll put an asteroid belt between them. Gotta remember the energy peak though. Let's make fusion impossible for them and put the organic peak a bit after 4.5 billion years. Add some randomness so it doesn't become obvious to their philosophers. And voila, one standard carbon life solar system. Now which corners to cut.

First the big stuff. There is no way I have time to simulate to the real quantization. I guess I'll do a hundredth of real. 10^-35 should be lots. It's not near the 10^-3500 of reality, but that just means that their computers will stay large, slow and power hungry. And who knows, that puts quantum effects close enough in size that maybe there will be some interesting biology out of it. Plug in the standard quantum model and I save a whole bunch on the time quantization too. Let's see how long that will take to run, say, five billion years.

A year. Xug. Man the supercomputer must be fast. Time for more cuts. Let's see where all the time is being consumed. Hmm, that's a lot of memory used to simulate the universe within non-zero gravitational effect. That's easy to fix though, just limit gravity to the speed of light, they shouldn't reach the point of gravity control soon anyways. That means I can just simulate the universe delayed. That'll save a ton of memory. Not a real huge time saver though. Ok, Type 1, think really limited. No planet splitting so I can get real coarse with the internal simulation. Screw the quantum model, let's go with trivial fluid dynamics for the cores of planets. Simulating the first 10KM of crust should be lots. I don't even have to be that accurate since they won't get that far, so let's use graduated accuracy starting at the space quantization 10KM down and going to the cubic kilometre at the centre. You know, let's go crazy and only be that accurate for the planet with life. The rest can just live with fluid simulation starting at 10^-5 and going down to hundreds of cubic kilometres for the larger planets. If anything leaves the planet I guess I'll have to up the precision there, but that's no big loss.

So that's the bodies, but most of the solar system isn't body, it's space. If I follow the same kind of deal with space why not reduce its precision dynamically too. It's mostly empty so I can do a 10^-20 simulation for most of that at a reduced time granularity. Sure it'll have physics artifacts, but it'll all balance out nearly immediately. Keep them guessing. Now how long?

One month. Closer, but I still need some big gains. Let's take a look at the physics engine options to save there. Unobservable approximation. That's an option I was hoping to avoid, but there isn't really anything I can do about it. My little species will just have to live with being unable to unify physics in the large and physics in the small. That's unavoidable when you use two different physics engines depending on the energies and precisions involved. It'll be close, but they might not even progress to the point where they'll notice. I really really hope that gets me there. I don't know what else I can cut.

A bit less than a day. What time is it now? Just after lunch, perfect. As long as my little species doesn't push too far past the boundaries of their planet it'll be done with a couple of hours to spare to throw together some BS observations and charts.

Sometimes being a third year student means doing a rush job.

Many Fueled

Lithium coin cell. Twelve volt lead acid starter. Gasoline. Lithium-ion rechargeable. AA alkaline. Wood. White gas. Propane. Butane. Diesel. AAA alkaline. Food. It takes a lot of different types of fuels to run a modern autumn camp. Too many types to be honest. It's a far cry from where camps started.

Wood and food used to be the fuels of the day. Perhaps with hay thrown in if you were well off enough to have some beasts of burden with you. Wood was found locally, though you often had to bring in at least some of the food and hay. This wasn't what you'd call convenient though, having no truly portable light to travel by and needing to burn a fire down to coals before you could cook.

At some point lamps and candles were added to the mix. Neither particularly light nor bright, but easily carried light is a valuable thing. The sweet spot really came about with pressurized petroleum stoves and lanterns. No more stumbling around at night, and you could be cooking in a couple of minutes on the stove versus half an hour or more on wood coals. Wood, food and white gas. Three fuels for all needs with one locally gathered. A camp setup like that would look almost modern.

But then things started to get complicated. Make no mistake, what was lost in fuel simplicity was more than gained in convenience. Flashlights sure beat a lantern for walking around and packing on a hike. Propane makes cleaner and easier to use stoves and lanterns. With white gas you can be cooking in a minute or two; with propane it's literally seconds. Matches are nice and all, but butane lighters are really nice when they'll do the job. And I'm sure few will want to trade their ATV or truck for a horse.

Convenience sure can be a pain sometimes though.

Practical Bear Safety

Bear safety in the woods is an important matter. If you are lucky you only have to deal with black bears. If you are unlucky you not only have to deal with black bears but also grizzly bears. In either case you want to stay safe. Here are some practical, if not widely advertised, tips for keeping your camp bear safe. These tips are not for a light hiking camp, but more for a semi-permanent camp with several people where you'll be staying for more than a week.

  1. Have a fire. Fire is perhaps the most critical component of keeping a camp bear safe. With a fire you should burn all the garbage you can. This is most obvious with the food wrappings and food scraps, but food cans can also effectively be burnt. The heavy food cans can be put in the fire to burn off the food residue and, after they have cooled, kept to bring back to a recycling centre in the city.

    Just having a fire burning, especially through the night, will also serve as a deterrent, though not an absolute one, and keep wildlife away from your camp.

  2. Burn off your BBQ. Similar to the idea of fire you should ensure that any cooking surfaces are well cleaned before being put away. With pots and pans this means cleaning them shortly after finishing your meal with water and soap. For BBQs and other grills you should run them at a high power for a few minutes after you are finished cooking, scrape them and then run for a few minutes more. If it no longer smells like food it won't attract bears after the cooking smells have dissipated.

  3. Mark your territory. Animals are intensely sensitive to smell. Use this to your advantage. Make sure to urinate around the perimeter of your camp on a regular basis. It helps if you are eating foods or consuming drinks which add, let us say body, to the urine. This isn't an extremely strong deterrent, but you'll want to do it anyways because walking in the dark is hard so it's good to have an excuse.

  4. Drink beer and pop. If you can't prevent a bear from coming to visit your camp you can at least get some warning that they've arrived. Drink copiously and use the empties as an early warning system. This is especially useful near the areas where food is stored or cooked. If hunting you should have a pile under any hanging harvest. Ensure that when you leave you collect all the empties to return to help pay for the next case of beer.

    If you have somebody known to snore, ensure that they get an extra measure in their cup. It'll most likely make them snore all the louder, and who wants to come near a chainsaw running at night?

  5. Have a Designated Teetotaller. Though you should drink so you have the empties to use as a warning perimeter, there should be at least one, but preferably several, people who aren't drunk and can handle a bear should one appear. This doesn't mean that they can't drink at all, but they mustn't drink past the point at which they could still drive.

  6. Sleep with a Gun and a Big Flashlight. All of the above are really only deterrents to bears. Nothing will really stop a bear which wants to come visit your camp. In that case there is only really one thing to do: have a gun safely at the ready. You'll need the flashlight since it'll be dark. It might take two people, one to carry the gun and one to carry the flashlight, if you don't have a headlamp.

    Try and have the designated teetotaller be the one using the gun.

    Should things get this serious don't bother with a warning shot; the bear will either not understand it or simply come back later. Go straight for the chest shot. All the better to put two in there to be sure. Never go for the head since you'll either miss or, in the case of a grizzly, just bounce the bullet uselessly off its forehead. In the morning you must remove the corpse to some distant dumping spot.

These aren't your standard bear safety tips, but they'll get the job done if you are in a situation where the normal safety tips are not practical. Just keep in mind that in the woods the bear is top dog and you are beneath it on the food chain.

Golden Age of Burgers

Imagine the year is 1958. You are a 17 year old boy living in a medium size American city. It's a warm Saturday afternoon and your father has lent you his car for a couple of hours. Life is pretty good. So good, in fact, that you decide to use your gasoline powered freedom machine to visit the local burgershack with your friends.

Hamburgers likely existed in a similar form before this idyllic Saturday, but this may just be the perfect time in history to go out for a hamburger. The War is over, post-war prosperity has arrived in America and many of the previous hardships have passed. Even more importantly, the burger optimization which will eventually drive the local burgershack out of business and replace its burgers with limp, dry imitations has yet to come.

You and your friends cruise the relatively empty streets with sidewalks full of people. You pull into the burgershack and order a burger and milkshake. You pay with the money from your part time job. This is truly the golden age of burgers.

The mass of fiction and fevered dreams above was brought to you by Fatburger. Burgers how I imagine they were before fast food meant sixty seconds or less.

Sorry but the Answer is Never

Technology is a wonderful thing. It gives us previously unimagined abilities. As one example, who would have predicted, even a hundred years ago, that I could write to you today from the comfort of my home on a handheld device such that within the day many people can read it and not only respond to me but have an active discussion with me on the matter? Even more amazing is that this happens over a system where it is impossible, in theory, for any participant to discover the identities of any other participant. Or that I could, with a minimal investment, use machines capable of completely reshaping huge swaths of the Earth's surface. Some days it seems that technology can solve all our problems and that flying cars are just a matter of time. So when will we get our flying cars, four hour work weeks and vacations in space for the average person?

I'm terribly sorry to tell you this, but the answer is never.

Technology is a powerful tool with one unavoidable requirement: energy. Technology can be evolutionary and reduce the amount of time or energy a task consumes, such as the backhoe. Technology can be revolutionary and fundamentally change how a task is performed to skip a step, as in the case of microwave ovens. Technology can even convert one form of energy, say coal, into other forms such as heat, light and motion. Technology can do a great many things, but technology is not energy, and it cannot create energy. Instead technology must always consume energy. There is an undeniable minimum quantity of energy required to heat a cup of water, light a room or move a person across town.
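
A quick back-of-the-envelope number makes the point. The quantities below are my own illustration, using nothing more exotic than the specific heat of water:

    # Minimum energy to heat a 250 mL cup of water from 20 C to 95 C,
    # ignoring all losses: Q = m * c * dT.
    mass_kg = 0.25       # 250 mL of water is about 0.25 kg
    c_water = 4186       # specific heat of water, J/(kg*K)
    delta_t = 95 - 20    # temperature rise in kelvin
    q = mass_kg * c_water * delta_t
    print(round(q / 1000), "kJ")  # about 78 kJ, no matter how clever the kettle

A better kettle can stop wasting heat on the surrounding room, but it can never get below that number.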

Of course better technology can get closer to those minimums, but we've been working toward using less energy for many tasks for a couple hundred years, so we are already pretty close to the limits.

None of this would be a problem if we had plentiful energy. With plentiful energy we need merely spend more energy per person to keep improving our abilities. Alas there are only so many different sources of energy, as I've outlined previously, and a major few are on the cusp of decline. These energy sources are collectively known as fossil fuels and the most popular term for their decline is Peak Oil.

Peak Oil has nothing to do with running out of oil, you will always be able to buy a barrel of oil or gallon of gasoline. You will always be able to buy these in much the same way that you are able to buy premium foodstuffs today. You might not be able to afford eating the finest of steaks with all the fixings day in and day out, but most can scrounge together sufficient funds for special occasions.

Peak Oil is not truly about oil at all. It is really about energy. Oil is mentioned merely because it is such a huge portion of the world's energy production and is extremely well studied in its production and decline rates. And because it has been on the minds of thinkers since the North American oil shock of the 1970's.

Peak energy is why you'll never get a flying car or visit space. Once you pass peak energy the capabilities of the person of average means begin to decrease. It'll start slowly, with the most energy intense activities being priced out of reach first. Perhaps it'll be that cross country touring vacation for which the fuel would cost too much, so you opt to fly to a single city and drive while there. Or perhaps it's not buying a bigger house, or replacing the car every five years instead of every three. The technology will still exist and some will still be able to afford it. But money is just a metaphor for energy, so as the amount of productive energy in the world starts decreasing, so too will the things the economy is capable of sustaining.

There are many who will deny limits to technology and it is true that technology can convert energy from, for example, light to electricity. But how much of our limited stock of energy must go into manufacturing this technology for what return? Every believer that technology will save us from a decreasing energy supply should ponder deeply where the energy to run this saviour technology will come from.

Peak Oil is Peak Energy. Peak Energy is also Peak Ability, but Peak Ability is not Peak Technology. I'm sorry you won't get your flying car, but you might yet get the four hour work days of your great grandfather's time.

How to Use a Thinking Machine

Computers are wonderfully useful machines full of possibilities to make one's life easier. Unfortunately most people don't know how to make the best use of their thinking machines. They only use the simplest features of the software they buy and perform many manual steps which the machine could do for them. In short they are insufficiently educated on the key to the most effective use of a computer.

The biggest key to using a computer to the fullest is that you, dear user, should use them in such a way as to forget as much as possible as soon as possible. You must let the computer do the remembering and thinking wherever practical. How to do this will be clearer with some examples.

First consider the case of email. Many people read their email, letting each message become marked as read, and then have to remember which messages still require a response or some action. This is a situation where remembering can be pushed off to the computer. Messages which don't need an action should be separated from messages which still need to be read or acted upon; move them to another email folder.

Another email case occurs with finding old email. Many email programs have become quite good at searching for old messages these days. There is no need for anything but the coarsest of email filing hierarchy. Just throw the email into a large bin after finishing acting upon it. Perhaps this is one folder per project, perhaps this is a single folder named 'Old'. It doesn't have to be complicated.

The previous two examples are more about using the existing features of the software in an intelligent way to have the computer do the remembering for you. Beyond remembering and communicating, where computers cannot yet save you much more than delivery time, computers can think for you. This requires simple programming and scripting. Those of you using Windows will find this more difficult than on other systems, but it can still be done. Writing dirty scripts in bash or python or make is a great way to teach the computer how to do some thinking for you. If you are clear in how you write these scripts you can then forget how to do those tasks entirely.
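
As an example of the sort of dirty script I mean, here is one that sweeps anything older than a month out of the downloads folder and into an archive directory. The paths and the thirty day cutoff are just my own habits; the point is that once it's written you never think about the chore again:

    #!/usr/bin/env python3
    import shutil
    import time
    from pathlib import Path

    downloads = Path.home() / "Downloads"
    archive = Path.home() / "Downloads-archive"
    archive.mkdir(exist_ok=True)
    cutoff = time.time() - 30 * 24 * 3600  # thirty days ago

    # Move every plain file that hasn't been touched in a month.
    for item in downloads.iterdir():
        if item.is_file() and item.stat().st_mtime < cutoff:
            shutil.move(str(item), str(archive / item.name))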

Writing applications has a well deserved reputation for being aggravating because users demand flexibility and that all the sharp edges have been sanded off. Writing a basic tool which performs exactly your own task and nothing more is several orders of magnitude easier due to the specificity and ease of modification. When what you need done changes slightly you can change the script to match with ease.

Computers are also known as thinking machines for a very good reason. When used most effectively you can offload tedious but significant parts of your brain to the machine, thus leaving you with a less stressful and more productive life. And productivity means more cold drinks with friends.

Threat Models, Security Researchers and You

Your communications are insecure. Criminals and governments the world over can read your emails, what you read on the web and watch you as you bank online. This is not your fault, it is the fault of security researchers. Security researchers reading too many cold war spy thrillers as children has left you without effective communications security. Your communications are insecure because security researchers are using the wrong threat model.

Threat models are the axioms, the base assumptions, of the security world. They are used to design the security system by defining what is being protected and what the attacker is capable of. Consider two threat models for a warehouse owned by a criminal organization. In the first model the crime syndicate has the warehouse in a country with a strong rule of law and is only worried about other criminals breaking into the warehouse and stealing the illicit goods within. In this case it is probably sufficient to make it known that they own the building, and then have standard locks to deter the pettiest of criminals and cameras to identify any successful thieves. Those thieves can then be dealt with extra-judicially after the fact as a warning to the next would-be thief.

The second model to consider is that same warehouse of illicit goods, but this time the police are militarized and the rule of law weak. In this case the warehouse must be defended against military raids. Locks and cameras are useless; blockades, reinforced doors and armed guards are more what's called for. This is quite different from the first case, where all this extra security would be detrimental to the security goals. These two examples show that security assumptions are critical to making effective security choices. Choosing the wrong threat model can result in less security than no protection at all. The crime syndicate in the first scenario would immediately lose their goods to the police if they posted armed guards.

It is no different in the realm of communication security. Choosing the wrong model can expose you to insecurity or make the cost of the security outweigh the benefits. Consider two of the most prolific threat models used on the Internet, Cold War Spy and Faith in Government.

The Cold War Spy threat model is a favourite of the security community primarily for historical reasons. Over the past century only governments did communications security, and they did so primarily under the auspices of the military. Consequently the threat model discussed was usually some variant of: anybody can be a spy, assume every line is tapped by the enemy, leaking even one message will result in somebody dying, and all users will be well trained because it may be them who pays the ultimate price for mistakes. All very reasonable if you are a military commander trying to plan an invasion and keep your spies alive. Not so reasonable if you are trying to trade emails with your friend across the country. This model is typified by the PGP model of security.

The popular alternative is the Faith in Government threat model. In this model government can be trusted. They can be trusted to correctly vet every key and certificate or delegate to equally trustworthy entities. This sounds pretty reasonable right? Your government follows the laws and doesn't need to make things insecure since they have legal methods of tapping whatever they want. Reasonable except that it isn't just your government which is being trusted and it isn't just all the employees in your government and delegates, it's every government. Corruption and privacy laws may be strong in your country, but can you say the same about China, Syria, Lebanon or Somalia? Can you trust that the certificate issuer of Russia doesn't owe somebody a favour they can't refuse or want a new luxury car? Of course you can't. This model is typified by SSL, the only real encryption used on the web for online banking and commerce.

These models leave the average person vulnerable because they ignore the situations average people find themselves in. Cold War Spy communications security is too hard and cumbersome. Nobody will die if one random email message of mine is broken. My lines are not usually tapped, especially since I move between network connections several times a day. Most people who come in contact with my messages couldn't care less and will merely do their job and otherwise ignore my messages. Cold War Spy security just isn't applicable to the average situation. The Faith in Government model is equally flawed. While I may trust my government because they can subpoena whatever they want anyways, I certainly don't trust their third rate corporate delegate run out of a derelict warehouse. This is to say nothing of trusting the governments, government officials and private persons in the various high corruption areas of the globe.

So most communications are vulnerable to criminals, foreign governments and corrupt local government employees. It doesn't have to be this way, but security researchers don't have your threat model in mind and are unwilling to accept the compromises necessary to protect it.

World of Diminished Returns

For the past century the world has been on a trend of rapid improvement. A small or moderate investment into creating new technology, such as the automobile, or more widely distributing an existing one, such as electricity, reaped moderate or large benefits. The benefits of previous investment would be compounded with the next invention. Thus the modern world reached its current fantastic pace of progress. The power of compound improvement and free flowing invention.

Alas, when I look out upon the modern world I see examples of diminished returns all around me. I do not mean in the financial investment sense. Instead I mean in the improvement sense. Any improvement which is put into place now is done at a moderate or large cost for a small or moderate gain, and more often than not it is not done at all because the costs outweigh the benefits in any rational evaluation.

I see this and I find it worrying. The culture of the world has come to expect and depend upon rapid progress. It will do whatever it can to maintain it, even if the costs rise to illogical highs. Consider car safety. The number of deaths due to accident has been on a pretty consistent decline for decades. Some of it has been better roads, some better cars, some better drivers. But at this point how do you reduce the number of accidents? You can make all roads divided, but that is immensely ugly and retrofitting existing roads is prohibitively expensive. You can't just claim an extra two lanes of space for every road in an existing urban area. Most of the roads are already there and hemmed in, so a small increase in safety due to better roads is shockingly expensive. You could look at better cars, but physics doesn't really allow significant improvement, and the inertia of existing cars on the road means any improvement takes ten years to have a noticeable effect. Where do you go after air bags and crumple zones and ABS brakes? Where do you go after commuter cars with the performance of a 70's supercar? There really isn't a lot of room left to squeeze more braking power out of wet roads or more crash avoidance out of tired human drivers. So you move to training better drivers. Now you can't revoke the licenses of all the existing drivers, so you train new drivers longer and better. In twenty years you might see better drivers, or maybe all that training has been forgotten in five years and you've just made it more difficult for young people to get driving licenses. This delay has negative consequences for the economy since cars are such a huge fraction of the economic activity in the developed world.

There is some work in partially replacing the driver, but it's unclear if a partial replacement will actually increase safety. Removing the driver from concentrating on the normal driving and expecting them to jump in when a situation the machine can't handle comes up sounds ludicrous. There doesn't seem to be any cheap place for increasing car safety anymore. It's no longer as easy as not having a solid steel steering wheel.

I believe we are in a world of diminished returns, but I don't think the wider public has come to accept that. I don't know that the world can continue to pay the societal cost of constant improvement. I don't know that the world will be able to accept a reality where things can only get worse.

Investment

The modern global economy depends upon investment to operate. Money is invested, products are produced and sold, profit is returned. Many people know this. However, I would argue that the majority of people who don't work in business, accounting or financial services don't intuitively know this. Instead they only understand the abstract version of it, financial investment. This is greatly to their detriment.

Financial investment is the most advertised type of investment. The ads equate investing with putting your money into their funds and pulling out a small fortune in twenty years' time. This is more or less harmless as is, but it is the kernel of the disease of misunderstanding. This is most evident wherever a housing bubble is inflating. Some people go ahead and buy a house, perhaps fix it up and then quickly sell it for a profit. Once this becomes a trend people see the price of their houses going up. Suddenly houses are an investment. Put your money in, irrespective of whether you live there or not, and in a few years cash out with an extra 20-30-50%.

Unfortunately this misses the form of investment which is more useful to the average person with a finite budget and never as much money as they could use. I will term this kind of investment productive investment. Productive investment is paying money for depreciating assets which have a positive return on investment. Take education as an example. Suppose that a degree or diploma will cost you $10,000 once tuition and books are taken into account. The longer you have held this degree the less it is worth. The certificate may help you find a better job shortly after you graduate, but ten years later it's mostly your experience in the field which helps you find another job. Thus a diploma is an asset which decreases in value as time goes on. If the new job is better and pays more then it's likely a good investment.

One example of a borderline case, often argued to never be an investment, is a new car. While it is absolutely true that a car will never be worth more than you paid for it, that doesn't mean it cannot be an investment. Consider the case where you already have a quite old vehicle, say a thirty year old beater. Conditions could be such that the difference in price between a used car a few years old and a new car is only five or ten thousand dollars. If you intend to keep the car for many years, a decade at least, and the new car is very fuel efficient, then it may be worth the extra cost. It then comes down to math: the extra vehicle cost versus the additional reliability (important if you drive for a living or otherwise have a lengthy commute with no affordable alternative such as transit), reduced repairs over the next handful of years and reduced fuel costs. It is especially important to take into account the higher future cost of fuel, since you may only be saving $0.50 per 100 km now, but in ten years fuel could triple and you would then be saving $1.50 per 100 km.

Of course these numbers ignore the less tangible benefits such as reduced stress, greater happiness or a vehicle which fits your needs better. And of course the best transportation investment is not having to own a car at all since almost no personal vehicle ever fully pays for itself, though the cost versus a more used vehicle may not be as great as it first seems.

Hopefully I've made clear how many things which can never be sold for more than their purchase price can still be viewed as investments. This is how business looks at it. You have to spend money to make money, but you also have to spend money to save money. Paying for efficiency and longevity can be a worthwhile investment. Longevity especially, since a $20,000 purchase depreciating to nothing in 25 years is less expensive than a $10,000 purchase which is worth nothing in eight.

Not The Stat You Are Looking For

People seem to have a fetish for Life Expectancy. Not a measurement of how long they'll live, which would actually be useful, but the formal statistic. It comes up in any discussion of progress (now versus the middle ages, for example), the rate of progress (has invention hit diminishing returns yet?), comparisons between countries, why it sucks to be in the developing world, and many others. Champions of improvement point to the massive gains since the second world war, moving from 64 then to 80 today in the developed world. People arguing that life in the middle ages was brutishly short point to the thirty year life expectancy.

All of these are strong arguments based upon a misleading statistic. Formally, Life Expectancy is the mean age of death of every person in a particular time and place. Though at first blush this sounds like what you want, it really isn't. It is heavily biased towards measuring the deaths of infants and young children. If a life expectancy is low then it is almost certain that infant mortality is high. However, the infant mortality rate doesn't matter much except for its effect on the life expectancy statistic. If an infant is born and doesn't make it through the week that's a shame, but it isn't a useful measure of the age at which adults tended to die, whether of natural causes or not. To get at the latter you'd want a statistic more like the median age of death of everyone over fourteen, or something calculated like life expectancy but excluding anybody under fourteen. Unfortunately these more useful statistics are not commonly available.

Life expectancy does have its uses, but they are much narrower than its use as the be-all end-all measurement of how long the average person lived. People still commonly lived into their sixties and seventies in the middle ages, even though the life expectancy was nearer to thirty years than eighty. Don't base an argument on life expectancy; it just doesn't mean what everybody thinks it does.

Annotated Tour: bash

On occasion the topic of shell configuration has come up in discussion in one regard or another. Often I have some useful tidbit in my bashrc which others don't know about. I have thus decided to begin writing up an annotated tour through the configurations for the various tools I use on a daily basis. This is the first in that series where I cover what's in my bash configuration.

Let's start from the top.

# First unpack $TERM because it's the only effective way to move arbitrary environment
# variables through ssh to arbitrary hosts. The format of the modified string is:
#   realterm:flag1,flag2,flag3
# Each flag will be set to its own name as its value. Thus, "xterm:USE_FANCY_KEYBOARD"
# will result in:
# $TERM=xterm
# $USE_FANCY_KEYBOARD=USE_FANCY_KEYBOARD
EXTRAFLAGS=${TERM##*:}
export TERM=${TERM%%:*}

if [ "$EXTRAFLAGS" != "$TERM" ]; then
        IFS=',' read -ra ENVFLAGS <<< $EXTRAFLAGS
        for flag in ${ENVFLAGS[@]}; do
                export $flag="${flag}"
        done
fi

From what I've been able to determine there is only one portable way to move environment variables from one machine to another via ssh. While you can configure ssh to copy particular environment variables when logging in, it requires configuration changes on both the client and the server. Since that isn't portable I move my environment variables, only flags at the moment, by stuffing them into TERM, which ssh passes along by default. This code unpacks the encoded TERM, sets the flags and restores the real TERM value.

# terminal configuration options:
case $STY in
   *pts*|*tty*)
      session_name=`sed 's/.*\.//' <<< $STY`
      ;;
   *)
      session_name=`sed 's/[^.]*\.//' <<< $STY`
      ;;
esac

This code extracts the screen session name. If there isn't a user-set session name it extracts the hostname instead.

case $TERM in
        xterm*)
                TITLEBAR='\[\e]0;\u@\h: \w\007\]'
                ;;
        screen*)
                if [ ! -z "$session_name" ]; then
                    TITLEBAR='\[\e]0;[${session_name}|${WINDOW}] \u@\h: \w\007\]'
                fi
                ;;
esac

With the screen session name I can then put that and the current window into the terminal emulator titlebar like "[daredevil|6]". I use this to keep track of which session and window number I am in as I move between several sessions often and create and destroy windows regularly.

# In progress work to detect whether the terminal is light-on-dark or
# dark-on-light. Very useful for things with colour. Would also be useful on
# odysseus.
if false; then
    if [ -z "$DARK_TERM" -a -z "$LIGHT_TERM" ]; then
        dark="1" # Default assumption of a dark on light terminal

        # Terminal.app
        dark="0"
        colour=`osascript -e 'tell application '\"Terminal\"' to tell the front window to get its normal text color' | sed 's/,//g'`
        for rgb in $colour; do
            if [ $rgb -gt 32000 ]; then
                dark="1"
            fi
        done

        # Xterm
        # echo -e "\e]11;?\007" will return something like
        # \e]11;rgb:rrrr/gggg/bbb BEL

        # Screen inside xterm
        # echo -e "\eP\e]11;?\007\e\\" will return as above. How can one detect
        # screen in xterm and is it even necessary?

    fi
    echo $dark
fi

This is some work in progress code to detect the background colour to give other applications I use a hint as to which colour scheme to use. It isn't enabled because there is no general way to determine the background colour locally, let alone through ssh. This is a hole in the traditional terminal information model and isn't helped by the fact that most emulators claim to be some variant of xterm, even when they obviously aren't.

# Preparation for system specific configuration. These are the interim aliases
# necessary so that the OS specific command names can override them if
# necessary. The primary example is that the ls with colour is a different
# command on NetBSD than on Darwin and Linux.
alias __ls='ls'

# Common configuration options:
export PATH="$HOME/bin:$PATH"
export VISUAL='vim'
export EDITOR=$VISUAL
shopt -s histappend # append to history instead of overwriting it
export HISTFILESIZE=100000
export HISTSIZE=100000
export HISTCONTROL=ignoredups
export LESS="-R"
export PAGER="less"
LS_OPTIONS="-h"
GREP_OPTIONS_DEFAULT="--exclude=tags --exclude=ID"
_GREP_OPTIONS=$GREP_OPTIONS_DEFAULT
NCPU=1 # Default to one CPU

# Not all greps support --exclude-dir
GREP_IGNORE_DIRS="--exclude-dir=.svn --exclude-dir=CVS --exclude-dir=.git"

These are all my default settings which are more or less portable across all the OSes and systems I use. Of interest here is 'histappend', which causes bash to append its history to the history file when closing the shell. This means I don't lose my command history when closing a shell, though sometimes the history I am looking for is further back than I expect. It works fine for opening a new shell and wanting the history of the most recently closed one though. I also set the ignoredups option, which doesn't save consecutive duplicate command lines, such as when I rerun the same command repeatedly as a manual watch.

Another setting of interest is -R for less. This setting makes less pass through colour escape codes. This is most useful when using grep colouring as configured below.

Finally I set up options to ignore VCS directories by default when grepping around.

function pwd_len_limited {
        local pwdmaxlen=20
        local pwd=${PWD/$HOME/\~}

        if [ ${#pwd} -gt $pwdmaxlen ]; then
                local pwdoffset=$(( ${#pwd} - $pwdmaxlen ))
                newPWD="#${pwd:$pwdoffset:$pwdmaxlen}"
        else
                newPWD=${pwd}
        fi

        echo $newPWD
}

This function takes the current working directory and returns only its last 20 characters. $HOME is automatically turned into ~ and any path which is too long is prefixed with #. I find this as useful as having my full path in my prompt, but without the disadvantages of the complete path, such as a prompt wider than my terminal. In practice the displayed path is almost always truncated so it doesn't introduce much variability into my prompt size.
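
As a worked example (the path here is hypothetical), a directory deep under $HOME comes out like this:

$ cd ~/projects/configurations/subversion
$ pwd_len_limited
#gurations/subversion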

function is_vim_running_locally {
        if ps -T -o comm | grep '^vim' &> /dev/null; then
                # vim is running in this local terminal
                echo -n "&"
        elif [ -f /p4conf ]; then
                echo -n "%"
        else
                echo -n "$"
        fi
}

This function sets the last character of my prompt. Normally it is the standard $. If this is a shell started from vim it will be &. At work we access the crosscompiler toolchain inside a chroot, so the prompt is % inside that chroot. The indication that I am inside vim is especially useful to prevent me from editing a file, starting a shell to run some command, forgetting I was in vim and starting vim again. Before this change I sometimes found myself three or four vim instances deep, wondering where the vim that was editing my file had gone.

# Functions which do non-trivial configuration which isn't always performed
function connect_to_ssh_agent {
        local SSHPATH="$HOME/.ssh/$HOSTNAME"
        # If we have a remote agent we are already done
        if [ ! -z $SSH_AUTH_SOCK ]; then
            return
        fi

        # If we have a record of starting an agent before, try connecting to it
        mkdir -p $SSHPATH
        if [ -s "$SSHPATH/sa.sh" ]; then
               . "$SSHPATH/sa.sh" >/dev/null 2>&1
                kill -0 "$SSH_AGENT_PID" >/dev/null 2>&1
                if [ $? -eq 1 ]; then
                        # agent is dead
                        rm -f "$SSHPATH/sa.sh"
                fi
        fi
        
        # If all else fails start an agent
        if [ ! -f "$SSHPATH/sa.sh" ]; then
                touch "$SSHPATH/sa.sh"
                chmod 600 "$SSHPATH/sa.sh"
                ssh-agent > "$SSHPATH/sa.sh"
               . "$SSHPATH/sa.sh" >/dev/null 2>&1
        fi
}

This function tries its best to always use an existing and recent ssh-agent when starting a new shell. This is most useful if I reattach to a screen session. Since the previous agent socket would then be invalid any new shells started in the old session wouldn't have a working ssh-agent otherwise.

# Mark the machines which have my fancy keyboard connected most of the time
function use_fancy_keyboard {
        export USE_FANCY_KEYBOARD="USE_FANCY_KEYBOARD"
}

This is just a function for tidiness below.

# Function which updates the settings of some environment variables. Useful when
# using screen and connecting from different machines.
function refresh_env {
        eval `cat ~/.ssh/$HOSTNAME/update_config`
}

# Create a file which can be processed to update various shell environment
# variables which may become out of date, such as DISPLAY when a shell is run in
# screen.
function create_update_config {
        local CONFIG="$HOME/.ssh/$HOSTNAME/update_config"

        # We only want to overwrite this configuration file if we are at the
        # root of a set of shells on this machine. Ie. if this session is in a
        # terminal window or as the result of an ssh login. We do not want to
        # overwrite the configuration if a new window in screen is opened or a
        # shell opened from vim. However, if this isn't the root we'll want to
        # ensure that we read the config file to have the latest settings.
        if [ ! -z $CREATED_UPDATE ]; then
            refresh_env
            return
        fi

        export CREATED_UPDATE=yes

        mkdir -p "$HOME/.ssh/$HOSTNAME"
        echo "export SSH_AUTH_SOCK=$SSH_AUTH_SOCK;" >  $CONFIG
        echo "export DISPLAY=$DISPLAY;"             >> $CONFIG

        if [ -z "$USE_FANCY_KEYBOARD" ]; then
            echo "unset USE_FANCY_KEYBOARD;" >> $CONFIG
        else
            echo "export USE_FANCY_KEYBOARD=$USE_FANCY_KEYBOARD;" >> $CONFIG
        fi

        if [ -z "$LESSKEY" ]; then
            echo "unset LESSKEY;" >> $CONFIG
        else
            echo "export LESSKEY=$LESSKEY;" >> $CONFIG
        fi
}

These two functions work together so that any new shell I start has up to date environment variables for things which may change with every remote login, even if the new shell is being started in the context of a previous login, as happens when reattaching to a screen session or starting a shell from an editor. Currently this only ensures that my ssh-agent, DISPLAY and keyboard layout are kept up to date. The refresh_env function allows me to update any already running shell without starting a new one.

# OS specific settings
OS=`uname`
case $OS in

This section applies OS specific settings. Usually these are options for different userspaces or different ways of determining if particular hardware is available.

        Darwin)
                export LC_CTYPE="en_US"
                export PATH=/opt/local/bin:/opt/local/sbin:$PATH # MacPorts
                export HOSTNAME=`scutil --get LocalHostName`
                LS_OPTIONS="$LS_OPTIONS -b -G"
                NCPU=`/usr/sbin/sysctl -n hw.ncpu`

                # Detect if my fancy keyboard is connected or not
                if system_profiler SPUSBDataType | grep Kinesis > /dev/null; then
                        use_fancy_keyboard
                fi

                # Version specific changes
                OSXVER=`/usr/bin/defaults read /System/Library/CoreServices/SystemVersion ProductVersion`
                case $OSXVER in
                        10.8.*)
                                unset PROMPT_COMMAND
                                ;;
                esac
                ;;

Settings for MacOSX systems. The only bit of note here is that I check to see if my ergonomic keyboard is attached. Several pieces of software I use have different key mappings depending on which kind of keyboard I am typing on. I don't tend to use MacOSX as an ssh destination so I don't take care to connect my ssh-agent or to check for a local login before enabling my fancy keyboard.

        Linux)
                export HOSTNAME=`hostname`
                _GREP_OPTIONS="${GREP_IGNORE_DIRS} ${_GREP_OPTIONS}"
                LS_OPTIONS="$LS_OPTIONS -T 0 -b --color=auto"
                connect_to_ssh_agent
                NCPU=`grep ^processor /proc/cpuinfo | wc -l`
                if [ $NCPU -eq 0 ]; then NCPU=1; fi
                ;;
        NetBSD)
                export HOSTNAME=`hostname`
                alias __ls='colorls'
                # Currently the only ls option is -h, which isn't supported
                # with colorls
                LS_OPTIONS="-G"
                connect_to_ssh_agent

                # sysctl requires extra permissions on some systems
                #NCPU=`/sbin/sysctl -n hw.ncpu`
                ;;
        FreeBSD)
                NCPU=`/sbin/sysctl -n hw.ncpu`
                ;;
        Solaris)
                #NCPU=`psrinfo | something
                ;;
        *) # Try something reasonable
                export HOSTNAME=`hostname`

These OSes have nothing special about them aside from some different options supported by a couple of userland tools.

esac

# Linux Distro specific settings
if [ $OS == "Linux" ]; then
        DISTRO=""
        # LSB check (Ubuntu)
        if [ -f /etc/lsb-release ]; then
                DISTRO=`cat /etc/lsb-release | sed -e 's/=/ /'|awk '{print $2}'|head -n 1`
        elif [ -f /etc/debian_version -o -f /etc/debian_release ]; then
                DISTRO="Debian"
        elif [ -f /etc/slackware-version ]; then
                DISTRO="Slackware"
        elif [ -f /etc/gentoo-release ]; then
                DISTRO="Gentoo"
        elif [ -f /etc/redhat-release -o -f /etc/redhat_version ]; then
                DISTRO="Redhat"
        fi

Find the name of a few Linux distributions I use from time to time.

        # Now we switch on the different distros because some of them are quite different
        case $DISTRO in
                Ubuntu)
                        export LC_CTYPE="en_CA.utf8"
                        ;;
                Redhat)
                        # I don't know which Redhat versions support a new enough grep
                        _GREP_OPTIONS=$GREP_OPTIONS_DEFAULT
                        ;;
                *)
                        export LC_CTYPE="en_US"
                        ;;
        esac
fi

The comment really says it all and this code adjusts a few things for slight differences between distros.

# Machine specific settings
case $HOSTNAME in
        travis)
                export DITRACK_ROOT="/home/travis/issues/issues"
                export QUEX_PATH="${HOME}/bin/quex-0.53.2"
                ;;
        daredevil)
                # Daredevil has an older version of gnugrep which doesn't support exclude-dir
                _GREP_OPTIONS=${GREP_OPTIONS_DEFAULT}
                export NNTPSERVER=localhost
                ;;
        multivac)
                export QUEX_PATH="${HOME}/bin/quex-0.59.5"
                export SVN_SSH="${HOME}/projects/configurations/subversion/svnssh.sh"
                ;;
        david)
                export USE_TINY_KEYBOARD=USE_TINY_KEYBOARD
                export SVN_SSH="${HOME}/projects/configurations/subversion/svnssh.sh"
                ;;
        tbrown-macbook) # Machine at Mobidia
                export CVSROOT=":pserver:tbrown@MobidiaCVS:2401/MOBIDIACVS"
                ulimit -c unlimited
                ;;
        tbrown3-macbook|tbrown3-vm32) # Machine at Tellabs
                export TELLABS=1
                ;;
        usscrh5bld*) # Build machines at Tellabs
                export PATH="${PATH}:/home/wiccadm/bin/cc:/usr/atria/bin:/net/sunwicc01/export/home/sunwicc01/wicc/tools/bin"
                alias ct='cleartool'
                export TELLABS=1
                # The build system breaks badly if done in parallel
                export MAKEFLAGS=" "
                ;;
        tbrown3-2|wiz|VTd-GAP) # vmbox at Tellabs
                export TELLABS=1
                ;;
        sdf|otaku|benten|faeros|iceland|norge|sverige|ukato) # Machines at SDF
                # These settings come out of the default .profile. At least
                # these are the settings I didn't overwrite.
                export MAIL=/mail/${LOGNAME:?}
                stty erase '^h' echoe
                ;;
        TRAVISB-ARISTA|*.aristanetworks.com)
                # Machines at Arista
                export ARISTA=1
                export P4MERGE=$HOME/configurations/perforce/merge.sh
                export SCREENDIR=$HOME/.screen_sockets
                mkdir -p $HOME/.screen_sockets
                chmod 700 $HOME/.screen_sockets
                ;;
esac

These machine specific configurations aren't that interesting for most people. In fact most of these entries are defunct but I keep them around as examples of how to perform certain types of environment specific configurations without having to reach back into my VCS history.

What might be of interest is the screen socket configuration in the last machine section. Remember how I said at work I spent time inside chroots in screen sessions? Well, in order to start a screen session within the chroot you must be in the chroot, which makes storing the screen sockets in /tmp problematic. Since this chroot does allow me access to my home directory I put the screen sockets under there instead so I can access them from outside the chroots.

# Set things up using the above configurations.
alias grep='grep --colour=always'
export GREP_OPTIONS=$_GREP_OPTIONS
alias ls='__ls $LS_OPTIONS'
alias df='df -h'
alias du='du -h'
alias free='free -m'
if [ -z "$MAKEFLAGS" ]; then
   export MAKEFLAGS="-j $(( ( ${NCPU} * 5 ) / 4 )) -l $(( ${NCPU} * 3 ))"
fi

This section takes all my default settings, which are modified above for specific OSes and machines, and exports them. Some things, like df using human readable units by default, are equally well supported everywhere and so aren't (yet) factored out.

One thing I do for make is to have the default number of parallel jobs be 5/4 times the number of cores. On one or two core machines this is equal to the number of cores, but on machines with more cores it's a bit higher to ensure that the machine is fully loaded. Sometimes IO or other delays will result in a few of these jobs not consuming a full core worth of CPU time, so I oversubscribe the CPUs a bit. I also limit make from starting new jobs if the load average is more than three times the number of cores. This is high enough that I'll push out any nice'd processes, but not so high that I make the box unresponsive. These settings really help when compiling large programs on 32 core or greater boxes which have background builds and tests running.
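
To make the arithmetic concrete, here is a quick sketch of what that expansion works out to on a hypothetical 32 core build box:

NCPU=32
echo "-j $(( ( NCPU * 5 ) / 4 )) -l $(( NCPU * 3 ))"
# prints: -j 40 -l 96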

if [ -z "$USE_FANCY_KEYBOARD" ]; then
    export LESSKEY="/dev/nonexistant_file"
fi

If I don't have my fancy keyboard then I want to use the standard QWERTY less key mappings. My .lesskey contains the mappings for my non-standard keyboard.

# Don't overwrite the interactive login info if this isn't interactive. Say if
# we are logged in somewhere and then scp a file over
if [ -n "$PS1" ]; then
    create_update_config
fi

You may not have noticed, but all my bash configuration is in this single file. I find this much more convenient than having to factor it out further into things which are to be used during an interactive shell versus not. Instead I simply skip the few bits which are not appropriate in a non-interactive shell and symlink all the other bash rc files, such as .profile, to this file. This section skips one of those interactive only chunks.

if [ -z "$DARK_TERM" -a -z "$LIGHT_TERM" ]; then
    # Setup a default of light on dark terminals since that's what I use most of
    # the time. Eventually I may get some autodetection working. At least on
    # some platforms. Background/foreground colour detection is a bit of a
    # forgotten corner of Unix terminal interaction. Especially since most
    # terminals claim to be xterm.
    export DARK_TERM="DARK_TERM"
fi

This is the export portion of the nonfunctional terminal colour scheme detection code.

# Second half to the $TERM flag ssh passthrough
alias ssh="TERM=\"\${TERM}:\${USE_FANCY_KEYBOARD},\${USE_TINY_KEYBOARD},\${DARK_TERM},\${LIGHT_TERM}\" ssh"

In order to export my shell flags over ssh I set up this alias to pack the flags into TERM before ssh'ing. This can be slightly annoying if you ssh to an account without my configuration because it won't know about the compound terminal type. I usually have a script which sshes to the correct machine without the compound TERM though, so this is rarely an issue. When it is, I fix it with an "export TERM=xterm" on the remote end.
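
For example, sitting at a dark terminal with the fancy keyboard attached, "ssh somehost" (a hypothetical host) effectively expands to the following, which the unpacking code at the top of this file then splits apart on the remote end:

TERM="xterm:USE_FANCY_KEYBOARD,,DARK_TERM," ssh somehost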

# Only set PS1 if there is one already set so that we don't set one in a non-interactive shell

PROMPTHOSTNAME=`echo ${HOSTNAME} | sed 's/\..*$//' | tr '[:upper:]' '[:lower:]'`
PROMPT='${PROMPTHOSTNAME}:$(pwd_len_limited) \[\033[1;37m\]$(is_vim_running_locally)\[\033[0m\]'

if [ -n "$PS1" ]; then
        PS1="${TITLEBAR}${PROMPT} "
fi

The last thing my bashrc does, if running interactively, is to combine my prompt and set it. Some of the machines I work on have uppercase hostnames (ick!) so I convert all hostnames to lowercase here. My prompt isn't as ornate as some, but I find it functional. The only piece which I haven't described here is the control characters which make the final character of my prompt bold as an easy visual marker of where my prompt ends and my command begins.

Too Much About Distributed Bug Tracking

Distributed bug tracking is a topic which had a burst of interest in 2008 and then again in 2010. Unfortunately, not much has come of it since and there is a lot of misunderstanding. Distributed bug tracking is also a relatively new concept so there are many facets which haven't been fully thought through or which are being re-invented repeatedly. This is an attempt to collect and explain all the major issues and approaches to distributed bug tracking seen in software to date. The intention is to serve both as a starting point for those looking to use a distributed bug tracker and as a summary of the major issues for those considering writing, or simply trying to understand, distributed bug trackers. There is also a comparison of existing software and some possible use cases for the reader interested in using a distributed bug tracker as part of a project.

Before diving in it's important to note that there are actually two definitions of distributed bug tracking competing for the term. The older, which I'll be discussing in detail below, is tracking or distributing bug information in a distributed manner much like you can track and distribute source code using a distributed version control system such as Git or Mercurial. The second definition is distributing bugs between many more traditional centralized bug trackers such as Bugzilla or Jira. I won't cover this latter definition here, but perhaps in a later post as a rise of DVCS-like distributed bug tracking will drastically increase the need for inter-tracker bug synchronization.

Software


Over the past five or six years there have been several distributed bug trackers written which have explored various different aspects of the domain. Most of these have issues ranging from minor through major. Here I've listed all the distributed bug trackers I was able to find in the course of my research into the topic. In a later section I'll go over a matrix of their capabilities and designs.

As you can see there is no lack of early projects exploring distributed bug tracking. Later I'll compare them to each other but first I will discuss the various dimensions and design decisions which go into a distributed bug tracker and are expressed in the above software.


Design Considerations


There are several aspects of distributed bug tracking which have parallels with traditional centralized bug tracking, such as which fields a bug should have, and several which are distinct, such as how bugs are stored relative to branches. This section will discuss only those shared issues which have direct relevance to distributed bug tracking. Issues such as bug priority policy will not be discussed as those don't differ between centralized and distributed bug tracking. Issues unique to distributed bug tracking will also be discussed.


On-Branch, Off-Branch or Out-of-tree


The first issue which comes up when people first ponder distributed bug tracking is where, with respect to the code, the bug database should be stored. There are three common options. The first and most popular is to store the bugs next to the source code in a separate directory in the source VCS. This is attractive because the developers already have that source available, and it's easy for the tracker developer because any VCS support required is limited to basic content tracking commands such as add and commit. Using a VCS also lets the distributed bug tracker developer leverage the existing VCS synchronizing and merging capabilities. Further, it allows bug information to follow the code across branches.

This latter ability is one of the great possibilities that distributed bug tracking brings to the table. Large complex projects which have several development and maintenance branches often have difficult or complex ways of tracking whether a particular branch has the fix for a particular bug or not. The best systems track which commits fix a particular bug and then leverage the VCS to determine if a particular branch has that change or not. Other systems use multiple bugs or other manually maintained fields to store such information for release and maintenance branches; development branches are usually too much work to cover using these manual systems. In the worst case the source of information is the original developer being asked to examine the branch to see if a particular fix exists there.

Obviously all the lesser traditional approaches have their issues. However even the best traditional methods depend heavily on the VCS being able to effectively determine if a change exists on a particular branch across a wide array of obstacles including complex merges, rebases, double commits and changes passed around as patches, which may be manually reapplied. This is a difficult proposition and inevitably the coverage of supported cases will have holes.

Another advantage of keeping the bug database with the code is that it can follow the code through source tarballs and packages as they are distributed and incorporated into distributions. It is also possible, with greater or lesser merge troubles, to have the bug data follow fixes along in patches.

The on-branch strategy is not without disadvantages, most of which are trade-offs for the advantages gained. The aforementioned bug data in patches is one such disadvantage. Since the bugs are stored beside the code any diffs or patches will, by default, contain change information related to the bugs as well. This is not always desirable and results in extra work to clean up patches or ignore bug changes. Similarly having the bug status track the code through various branches is a useful feature, but brings about the challenge of producing a summary view across the various release and trunk branches. It is also not immediately obvious where bugs against a particular release version should be filed or how to determine which branches have a fix if any have it at all.

Another alternative is to store the bugs inside the VCS in a separate branch. This approach results in a system which is more similar to the traditional centralized bug tracking paradigm. Designed this way there is only one source of bugs of which any particular copy of the repository will have a more or less up to date version.

Off-branch bug storage solves some of the issues related to on-branch storage, namely issues related to where a bug should be entered, keeping bug data out of patches or diffs and, as will be discussed later, how to get descriptions of bugs onto the branches where the bugs are. Similarly off-branch storage has as its disadvantages many of the advantages of on-branch storage. In particular off-branch storage does nothing to help track the state of a bug on any particular code branch.

Off-branch storage also suffers a few disadvantages of its own. By storing the bugs away from the code in a separate branch extra care must be taken to ensure that the bug branch is propagated. For example, systems such as git don't automatically push and pull branches other than the current one. This can lead to a project being pushed, to Github say, without the bug database being included. As we'll see later this is one aspect which may have contributed to low recommendation scores of some of the existing distributed bug tracking software since it may be the reason several appear to not dogfood themselves.
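
For example, with git the bug branch has to be pushed explicitly alongside the code; the branch names here are hypothetical:

git push origin master bugs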

Off-branch storage will also have difficulty transferring between different version control systems. Though it may feel like everybody uses git all the time, it just isn't true. Unfortunately how branches work differs across VCSes both semantically and with respect to the interface. This can cause limitations with entities, such as Linux distributions, integrating the upstream bug repository.

The least favoured storage method is neither on-branch nor off-branch, but out-of-tree. With out-of-tree the bug database is stored in some other fashion, either inside the VCS or using some other external database. One example of this is Fossil, which stores the bugs as part of its distributed database, but not really in a separate branch at all. Other examples are systems which take advantage of git-notes. These systems have the advantage of being clean since they don't have the clutter of bug directories or bug branches. Unfortunately that is really the only advantage they have. Storage of this form tends to be tightly integrated with a single VCS and usually even more care must be taken to ensure that the bug databases are propagated and merged correctly than in the off-branch case.

One advantage shared between off-branch and out-of-tree is that they hold the possibility of using custom merge algorithms. If bugs are stored on-branch then they must be merged alongside the source code and thus, for the most part, must use the standard source control merging algorithms. This will constrain the file formats of the bug database to forms which are feasible for basic textual merges to be successful and relatively easy for humans to merge manually when conflicts arise. Off-branch and out-of-tree, in contrast, hold the promise of using custom merging algorithms. This is theoretically possible with off-branch storage, depending on VCS support, and the norm with out-of-tree storage.


File Formats AKA Ease of Merging


Traditional centralized bug tracking has great freedom in how its data is structured and represented on disk. It is perfectly acceptable to require specialized tools to read the data, and the data is optimized to be processed by trusted and properly configured server software; any exceptions will be handled by prepared system administrators taking the utmost care. Distributed bug tracking has none of these freedoms.

Distributed bug tracking must operate in a world where the tracker doesn't have full control over what happens to its data or who has permission to change it. As we'll come back to later, distributed bug trackers cannot rely on authorization to ensure that only permissible states are entered; the best they can do is verify changes before they are integrated into the local bug database. As such, one important aspect of the file formats chosen by distributed bug trackers is that they must be difficult to corrupt.

The minority of existing distributed bug trackers have the ability to rely on specialized merging algorithms. Mostly these are out-of-tree based or based upon specialized databases. The rest must at least perform acceptably without the benefit of custom merging code. This is very true of on-branch trackers where the bug changes will pass through the standard code merging algorithms and mostly true for off-branch trackers where the bug branch will likely have at least a few hops where the specialized merge tool is not installed.

The two important aspects of distributed bug database file formats are how well they merge automatically using the standard textual merge tools and, since conflicts are sometimes unavoidable, how easily they can be resolved by humans. Conflicts cannot be entirely avoided because some data about a bug, such as whether it is resolved or not, is semantic and singular: a bug is either declared fixed or not. Consider the case of bug A and a tracking policy which has three possible bug states: New, Diagnosed and Fixed. Suppose Alice, in her branch or repository clone, fixes bug A and marks it as Fixed in her copy. Suppose concurrently Bob, in his branch or repository clone, looks at the bug, figures out what's wrong and marks it as Diagnosed. If Bob later pulls in Alice's changes he will receive a textual conflict related to the bug state. If a custom merge algorithm could be used this wouldn't be an issue since Fixed obviously overrides Diagnosed.
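
If a custom merge algorithm is available, the rule for such a simple linear lifecycle is easy to sketch. This is a minimal illustration assuming only the three states above, not code from any existing tracker:

state_rank() {
        case "$1" in
                New) echo 0 ;;
                Diagnosed) echo 1 ;;
                Fixed) echo 2 ;;
        esac
}

# Pick whichever of the two conflicting states is further along the lifecycle.
merge_state() {
        if [ "$(state_rank "$1")" -ge "$(state_rank "$2")" ]; then
                echo "$1"
        else
                echo "$2"
        fi
}

merge_state Fixed Diagnosed   # prints: Fixed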

Though the above case could be solved by a custom merge algorithm, there are cases where it is not clear that any algorithm can always make the correct merge. Consider the customer severity of a bug. Alice may mark a bug as Minor because it only affects two or three customers. Bob might, however, mark it as Critical because one of those few customers is the biggest customer the company has. No mere computer could ever have all the relevant information to always make the correct choice.

With these two aspects in mind there are several different file formats which have seen use in the software I've found. These can be divided along two dimensions: the format of each file and what is contained within each file. Out-of-tree storage designs won't be covered here since they tend to demand custom merging utilities anyways and be based upon more complex databases.

The most common file format appears to be a simple markup. Simple markups rate highly for ease of human resolution since there isn't a finicky file format to worry about. They tend to be rather inflexible and difficult to code for, however. Most of the formats in this class are either too simple to have a name or look much like the INI format.
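
As an illustration, a bug file in such a markup might look something like the following. The fields and layout are my own sketch rather than any particular tracker's format:

Id: a8d82ff764188578
Title: Crash when saving over NFS
State: Diagnosed
Severity: Major
Owner: alice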

The second most popular format seems to be a hierarchical markup akin to YAML. This differs from a simple markup in that the format is more complicated, but also more flexible. While these formats don't rate badly in terms of human conflict resolution there is a risk of a missing significant character causing issues.

The least popular appears to be full serialization formats such as JSON or XML. Unless pretty printed these are nearly impossible to manually resolve. With pretty printing these serialization formats tend to be merely error prone and tedious. One technique I have seen is to use JSON with each data element separated from any other via five or six newlines. The intent here is to reduce the possibility of a merge conflict by removing the other data in the JSON file from the context of the merge.

The file format chosen is perhaps the greatest determiner of how often automatic merging will be successful and how much pain the human will have to suffer when automatic merging fails. From this perspective alone the simple markup seems the best possible choice. Since these tend to be one-statement-per-line formats with minimal grammar requirements, automatic merges tend to corrupt them the least and they are the easiest for a human to merge manually, especially when the lines in the file are in a fixed order and produce nice diffs.

There are also three major ways to arrange the storage of bugs among a number of files. The simplest from a file layout point of view is to store the entire bug database in a single file. This has the advantages of efficiency, speed and ease of coding. As a disadvantage, every change to any bug will modify this file, thus ensuring that it will have to be merged constantly. This option is not used in many of the existing distributed bug trackers.

The most popular file layout appears to be one file per bug. This has the advantage of reducing conflicts since it is less likely that two developers will modify the same bug than two bugs in the same database. If the tracker restricts itself to singular semantic data only, such as bug state, then this can work well since any concurrent changes to that data would have to be manually merged in any case. If the tracker supports things like bug comments then this format is still open to frequent file merges as different people comment on the same bug at different times. Unfortunately bug comments in a single file will cause frequent merge conflicts until the number of existing comments becomes sufficiently large. At that point it is possible to place new comments into the file randomly to give the automatic merge the best possibility of success. Most bugs do not accumulate more than a dozen comments however.

The final common layout is to use (almost) immutable objects. In this scenario each issue has a number of files. All or most files will be immutable. One way to accomplish this is to put each comment into a separate, immutable file and give each bug one small mutable file which contains the singular semantic data. Since concurrent comments are common and, in principle, easily merged automatically the comments would be trouble free. Since singular data is impossible to automatically merge in all cases the file being mutable gives the human the full power of their VCS to help them determine the correct semantic resolution. An alternative is to use fully immutable objects and a log-like structure where newer objects override older objects. Such a system is capable of always merging automatically, but when the merges are incorrect, as in the Fixed/Diagnosed example above, the human is left with minimal tools to determine the correct resolution or even receive any indication that a conflict occurred which requires their attention.
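
A mostly immutable layout could look something like this sketch, again hypothetical rather than taken from any of the trackers below:

issues/a8d82ff764188578/
        status                    # the one small mutable file: state, severity, owner
        comment-9f3ab2e1          # immutable, one file per comment
        comment-c41d007a
        attachment-5e2f19bb.log   # immutable attachment blobs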

By allowing the maximum number of successful automatic merges while immediately bringing semantic conflicts to the user's attention, the mostly immutable object approach appears to be the superior method. Successful automatic merges have much less friction than the alternative, which is important to support the adoption of distributed bug tracking.

In summary it would seem, at this time, that a series of mostly immutable objects in a simple markup format is the best available choice for the backend bug storage format.


Process Automation


Centralized bug trackers tend to support process automation. Process automation is the ability of the bug tracker to ensure that a bug goes from New to Assigned to Resolved and, after being assigned back to the reporter, to Closed. Many projects use this to implement complex bug life cycles and bug handling processes. Distributed bug trackers don't have the luxury of supporting this feature in a reliable manner. There are two central reasons for this.

The first is that while centralized bug trackers operate on centralized and controlled servers, distributed bug trackers run on the developers' own machines. The developer won't be able to short circuit the twelve step bug process on the server, but if they are aggravated enough they'll disable the process enforcement code on their own copies of the repository. With no way to trust that every step has been performed in an allowable order the only way to confirm the process has been followed is to verify after the fact.

The second is that this after-the-fact verification comes with its own problems, even if the developers follow the process locally. Merging state between concurrent modifications can, depending on the complexity of the bug process, result in invalid or at least ambiguous states. Merging the output states of two identical but independently run state machines is not guaranteed to result in a valid state of the state machine. It is possible to verify that a valid state has been reached as the result of a merge, but that will involve manual resolution, often of a frustratingly tedious nature. Merging also makes it difficult to maintain a verified bug state since the transitions themselves cannot necessarily be observed.

In the end it seems that bug tracker automation will be done mostly with either wrapper tools or VCS hooks. As with DVCS hooks versus CVCS hooks, I believe we'll find that distributed bug tracking results in less stringent processes and additional trust being put into the users of the bug database, because hooks at a canonical repository can only be executed after the fact.


Comments, Attachments and Fields


The oldest form of distributed bug tracking is a TODO file committed beside the code. This is usually a simple list of tasks or bugs to be fixed, perhaps with a single brief comment explaining the issue in detail. This is the simplest form of bug tracking, just a list of titles, maybe with a description. At the other end there are massively complex centralized systems with bug processes, multiple comments, attachments and more fields, both free form and constrained, than you can shake a stick at.

Distributed bug tracking covers this entire range. Simple TODO lists are not very interesting because they are simple and quite limited, while massively complex systems are unlikely to succeed as distributed bug trackers for the reasons described in the previous section. Most interesting is the middle ground, along the lines of a basic Bugzilla installation. Such a bug tracker supports a handful of useful fields: severity, component, state, owner, etc. It also supports comments and attachments on bugs. Systems of this moderate complexity are commonly found in open source projects and smaller corporations.

The handling of the metadata fields is not terribly complex. These are the singular semantic data concerning a bug which computers will find difficult to correctly merge in all situations. Having a large number of these is not an engineering challenge, but beyond some number they will strain the patience of the developer and be ignored. A large number of metadata fields may also not be as useful in distributed bug tracking as in centralized bug tracking. Since relational databases are troublesome when it comes to distributed bug storage, most trackers use less structured file formats, so running arbitrarily complex queries on the bug database is cumbersome, often requiring parsing hundreds or thousands of files into memory before checking each record in a loop. It is often easier to simply use existing text processing tools to run regex queries over the database. If a tool like grep is used then there is no point in having a field for every possible situation since all the comments will be searched anyways. This being the case I believe that only the most useful fields will be formalized, with any other data being put into a structured form appropriate for the project and placed into comments.
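
For example, against the hypothetical file layouts sketched earlier a quick and dirty query doesn't need the tracker at all; the field names are from my sketches above:

# All critical bugs currently owned by alice
grep -rl 'Severity: Critical' issues/ | xargs grep -l 'Owner: alice'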

The issue of attachments is also not complicated other than the fact that most existing distributed bug trackers ignore this feature entirely. This is likely just an oversight due to the relative immaturity of the field. Attachments play an important role in the operation of a bug tracker by being able to store data that is too large to fit conveniently into a comment. Examples of this include logs or configuration files.

Comments in a bug tracker are a critical collaboration feature. Comments allow one developer to communicate through time with themselves, users, watchers or other developers. They provide an organized place to maintain discussion and investigation notes concerning a bug.

One particular issue related to comments and distributed bug tracking is comment order. Many bug trackers use a flat comment model where comments are made in a linear order. In a centralized model this works well since there is a definite order to the comments and there are only small windows where a comment can be posted while another is being prepared. In fact, many bug trackers detect this situation and prevent submitting the latter comment until the user has read the former comment. This is a form of real time merging. Because a consistent linear order is maintained in the views of different users the comments can constructively reference each other. However, in a distributed world concurrent comments will be the norm rather than the exception and not until much after the fact will it be possible to determine a canonical comment ordering.

It is not an insurmountable challenge to make flat comments work in a distributed world, but it is also not clear that it is the best way. One alternative is to work along the lines of email, where you respond to particular comments in a tree. This can then be displayed in a nested fashion, which makes it clear which comments are replying to which parent comments. Perhaps this might ease the difficulty of creating a consistent canonical ordering. One additional requirement of a nested presentation might be the necessity of showing the user which comments are new when they revisit a bug thread. None of the trackers I investigated appear to support this at the moment.


User Interfaces


The most popular bug tracker user interfaces are web interfaces. A web interface is convenient for centralized trackers because it is graphical in nature and has an easy communication path from the centrally controlled web server to the centrally controlled database server, often the same machine. The web interface also provides realtime feedback. The less common, but nonetheless effective, interface types often seen are CLI interfaces, email interfaces and GUI interfaces. Often these are used in concert with a web interface.

Of these there is no intrinsic argument against any but email interfaces. It is too burdensome to expect a developer to always have local email configured and to integrate every project or branch of a project into such a system. Most of the distributed bug trackers offer a CLI interface. This is a popular option because most interactions with a bug tracker during development are changing the state or commenting on a particular bug. For these purposes a CLI is more than adequate. CLI interfaces also have the great advantage of fitting well with the other CLI development tools such as VCSes, editors, build systems and test runners. CLI interfaces are also easy to script which allows developers to automate or integrate the tracker with other tools, such as their editor.

In general CLI interfaces are very convenient for the developer who is working on a bug. They are less convenient if the developer has to wade through a list of bugs to find a particular one or otherwise navigate a large amount of data. CLI interfaces are also entirely inappropriate for users of a project. It is unreasonable to expect a user to check out the source repository and use a CLI to see what bugs a project has or the current state of their particular bug.

Many of the disadvantages of the CLI interface could be ameliorated with a curses interface allowing interactive navigation and modification of issues. However this would still limit the interface to textual information. A related approach which offers additional flexibility is to support a local web browser interface. If this interface has reasonable support for terminal browsers then the effect can be almost as good as a dedicated curses interface, with the advantage of also supporting GUI browsers and all the niceties that entails.

Distributed bug tracking brings one additional wrinkle to a web browser interface. If the bug tracker is running locally and stores its database near the source code then having multiple concurrent users against one instance brings numerous difficulties. Among these are handling commit attribution and avoiding conflicts. VCSes provide tools to do this when one user uses a checkout at a time, but tend to provide no help at a finer level. The result is that any bug tracker intending to support this will likely end up reimplementing much of the isolation support of formal databases. Since this isn't required in the common case of a single developer working on their own checkout, this seems to be wasted effort.

Thus many distributed bug trackers will have two web interfaces if they have any. One will be for local use and one for public use. I am not aware of any existing distributed bug tracker which provides a read-write public web interface and stores the bugs either on-branch or off-branch, but there are several which have a readonly public interface. In a later section I will discuss possible ways in which this could be made to work when interacting with the public. If one is to write a read-write public web interface there are several design issues which need to be thought through first.

The first of these is how to get bug changes from the webserver to the source repository. A traditional centralized bug tracker stores its bugs in a mutable database. This allows data to be deleted at will. Additionally the integrity of a separate bug database is usually not considered as critical as that of the project's source repository. Thus if a malicious user comes along and fills the centralized tracker up with hundreds of megabytes of bugs the effects are relatively minor and a system administrator can easily delete the greater portion of the mess. If a distributed bug tracker stores its database in the VCS then it may not be possible to permanently delete junk data. It could be made to not appear in recent versions, but it would still exist in the immutable history. A rapid increase in size could also cause severe problems as a source checkout which was less than a megabyte suddenly turns into one several gigabytes in size, even if the checkout size later decreases back to the original after a cleanup.

The second is related to the interface for resolving conflicts between the public interface and the canonical bug repository. Since distributed bug tracking is distributed many bug changes can happen concurrently only to be merged later. This is handled using VCS capabilities in the developer case, but it is likely that using a VCS backend to a public web interface would be cumbersome or be used differently because of the possibility of many concurrent public users. If VCS help isn't possible in the same way as the developer use case then a separate tool might need to be provided to pick and choose which public changes pass moderation.

The rest of the major issues in considering a read-write public interface are those of any other public web site with user generated content and won't be covered here.

One possible solution is to have some sort of staging system where new data from the public interface is manually vetted before inclusion in the permanent copy of the bug database. This moderation would need to either be performed frequently or have the unmoderated modifications appear on the public tracker immediately to ensure the public users receive timely feedback.

Though there is no specific reason a dedicated GUI interface could not be written, none of the major software described here does so. This is likely partially due to the effort required compared to a web interface or CLI. Modern web technologies coupled with a single user web server would seem to provide nearly all the advantages of a GUI with significantly better portability and reduced development effort.


Bug Identification


As with the change from centralized VCSes to DVCSes global identification is a tricky subject. It is undeniable that the traditional linear numbering of bugs is an obvious method, where possible, and easier to remember when the numbers are small. Unfortunately such a system cannot be globally unique in a distributed world.

As with DVCSes there appears to be no alternative to random or pseudo-random identifiers, such as cryptographic hashes. This has proven to not be overly burdensome in practice as long as the tracker attempts to disambiguate hashes from a subset of the full string. For example the tracker should be able to determine that the bug identified by a8d82 is actually a8d82ff764188578 as long as the shorter prefix isn't shared by more than one full hash.
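
A minimal sketch of that disambiguation, assuming the hypothetical one-directory-per-bug layout from earlier:

resolve_bug() {
        local matches=( issues/"$1"* )
        if [ ${#matches[@]} -eq 1 ] && [ -e "${matches[0]}" ]; then
                echo "${matches[0]#issues/}"
        else
                echo "ambiguous or unknown bug prefix: $1" >&2
                return 1
        fi
}

resolve_bug a8d82   # prints the full id if exactly one bug starts with a8d82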

There are several methods that can meet this need, but the most common are encoded UUIDs and cryptographic hashes of the bug contents. Obviously these should be presented in a human readable format such as base-64 or hexadecimal. Base-64 has the advantage of being a denser representation, but it uses most of the keyboard characters and both upper and lower case letters. The latter can cause trouble when typing or when passing through systems which don't preserve case. Hexadecimal, on the other hand, is slightly less dense, but doesn't suffer from the case problem. Also, since it uses a limited set of characters it is easier to embed in identifiers in other systems, such as version codes.

It seems that there is room for an encoding of the pseudo-random hashes which is both denser than hexadecimal and yet avoids the major issues of base-64. Perhaps something like base-36 (0-9a-z) would fit the bill, though some of those characters may be difficult to type on some keyboards in some languages.
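
As a rough illustration of the density difference, here is one way a 128-bit random identifier could be rendered in base-36; the encoding below is only a sketch, not the scheme of any tracker discussed here:

  import os
  import string

  ALPHABET = string.digits + string.ascii_lowercase  # the 36 characters 0-9a-z

  def to_base36(n):
      """Encode a non-negative integer using digits 0-9a-z."""
      if n == 0:
          return "0"
      digits = []
      while n:
          n, rem = divmod(n, 36)
          digits.append(ALPHABET[rem])
      return "".join(reversed(digits))

  bug_id = to_base36(int.from_bytes(os.urandom(16), "big"))
  print(bug_id)  # at most 25 characters, versus 32 hexadecimal characters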

The inability to support linear numerical identifiers would seem to be a severe disadvantage. For projects with a small number of bugs, say less than one or two thousand, this is definitely the case. Beyond that number however the situation is less clear cut. When the number of required digits in an identifier is greater than four or five, or the new bug rate is more than a handful per day, the numbers themselves become more difficult to remember and lose meaning. On many large projects bug IDs are copied and pasted anyway since they are difficult to remember and easy to mistype. A similar situation appears to have won out in the DVCS world: large projects will have millions of commits, and in such a situation a linear numbering scheme is no easier to tell apart than the pseudo-random hashes which replaced it.


Software Comparison


Above I've listed all the distributed bug tracking software, both defunct and active, that I could find. In this section I will compare them briefly. For each piece of software I will first make any notes about it and then give a summary of its major aspects. Most of the aspects which are specific to distributed bug tracking have been discussed and explained above. After all the software has been described individually I will compare the most usable trackers in a table.


Artemis


Artemis is a basic tracker built as a Mercurial extension. It has pretty complete filtering options including the ability to store custom filters.

Last commit/release: Feb 2012
Language/Runtime: Python / Mercurial plugin
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: Maildir per issue
VCSes: Mercurial
Custom Fields:
Comments: Nested
Attachments: Yes
BugID: Hash
Multiuser: No
Bug Dependencies: No


b


b is another Mercurial extension with a simpler model than Artemis. Note that the last release is quite old, but the development tree has activity as of late last year. b is based on the t extension, adapted to provide for more bug tracker-like use cases. b doesn't provide a public website itself, but the hgsite extension will take a b bug database and produce a simple static website from it.

Last commit/release: Oct 2012
Language/Runtime: Python / Mercurial plugin
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: Readonly via hgsite
GUI: No
File format: Sectioned text fields
VCSes: Mercurial
Custom Fields: No
Comments: Yes
Attachments: No
BugID: Hash
Multiuser: Yes
Bug Dependencies: No


Bugs Everywhere


Bugs Everywhere is likely the most mature of the distributed bug trackers. It has a reasonably active user base and seems to have most of the features to be expected of a distributed bug tracker. The project has had multiple contributors and is currently on its third maintainer since 2005. Bugs Everywhere additionally has an email interface, which is rare among distributed bug trackers.

Last commit/release: March 2013
Language/Runtime: Python
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: Yes
Public Web UI: Readonly
GUI: No
File format: JSON, one file per comment
VCSes: Arch, Bazaar, Darcs, Git, Mercurial, Monotone, Others possible
Custom Fields: No?
Comments: Yes
Attachments: Yes
BugID: UUID
Multiuser: Yes
Bug Dependencies: Yes


cil


cil is another small CLI only distributed bug tracker. It provides some basic integration with Git, but can also be used with other VCSes as long as you are willing to add and commit changes to the bug repository manually.

cil uses a unique bug repository format where every issue and comment has its own file inside a single directory. Each issue and comment has a link to its children or parent. Thus adding a comment may cause a merge conflict in the issue file if another comment was added concurrently, but the conflict will be restricted to those references.

Last commit/release: Oct 2011
Language/Runtime: Perl
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: Simple key-value-freeform markup
VCSes: Git-supported but not required
Custom Fields: No
Comments: Yes
Attachments: Yes
BugID: Hash
Multiuser: Yes
Bug Dependencies: Yes


DisTract


DisTract is one of the older distributed bug trackers, but it seems to have fallen off the Internet. You can find the last copy of the site at Archive.org. DisTract is interesting in that it doesn't provide a CLI; instead all the bug interactions are performed from within a page in Firefox (not any other browser) which uses Javascript to access the filesystem directly. Unfortunately, since I have been unable to find any copies of DisTract, not a lot is known about it.

From the archived website it was clear that the author intended to have a bug specific merge algorithm, though it seems unlikely that ever came to pass.

A bug tracker which didn't make my list because it requires realtime access to a central repository but takes a similar implementation view is Artifacts for Web. The bug tracker runs locally in the browser but all the bug storage happens on a central SVN server directly.

Last commit/release: mid-2007
Language/Runtime: Haskell / Javascript / Firefox
Bug storage: ?
Dog food: Yes
CLI: No
Local Web UI: Yes
Public Web UI: ?
GUI: No
File format: JSON?
VCSes: Monotone
Custom Fields: ?
Comments: ?
Attachments: ?
BugID: ?
Multiuser: ?
Bug Dependencies: ?


DITrack


DITrack is the first off-branch distributed bug tracker in this list. DITrack is interesting in that it only supports SVN, a centralized VCS. As such several of its design features are rare. The first is a linear bug ID scheme: bugs are numbered in sequential order. Each issue is a directory made up of multiple files. Each file is numbered in sequence and appears to be immutable. Thus each issue is the sum of the log-type entries from the files. While sequential numbering has obvious problems in a decentralized system, the log structure does present an interesting solution to the merging problem. Since the bug is the combined last state of the various fields from the log there need never be any manual merging; regular file merging will automatically result in a last-wins bug metadata merging strategy.
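
To illustrate the general idea (this is only a sketch of log-structured, last-wins merging, not DITrack's actual file format):

  # Each log entry records which fields it sets; the bug's current state is
  # just the fold of all entries in order, so entries added concurrently on
  # two branches merge at the file level and the latest write wins per field.

  def current_state(entries):
      state = {}
      for _, fields in sorted(entries):  # sort by sequence number or timestamp
          state.update(fields)
      return state

  log = [
      (1, {"summary": "Crash on startup", "status": "open"}),
      (2, {"owner": "alice"}),      # added on one branch
      (3, {"status": "resolved"}),  # added concurrently on another
  ]
  print(current_state(log))  # status ends up "resolved", owner stays "alice"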

Last commit/release: Aug 2008
Language/Runtime: Python
Bug storage: Off-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: Read-only
GUI: No
File format: RFC-822
VCSes: SVN
Custom Fields: No
Comments: Yes
Attachments: Yes
BugID: Linear
Multiuser: Yes
Bug Dependencies: No


dits


dits appears to be the aborted beginnings of a distributed bug tracker. Its functionality isn't very complete and it doesn't appear usable.

Last commit/release: Apr 2010
Language/Runtime: Python
Bug storage: On-branch
Dog food: Yes
CLI: No
Local Web UI: Yes
Public Web UI: No
GUI: No
File format: JSON
VCSes: HG, Git?
Custom Fields: No
Comments: No
Attachments: No
BugID: Hash
Multiuser: No
Bug Dependencies: No


Ditz


Ditz is a distributed bug tracker which was, at one time, fairly popular in the Ruby community as distributed bug trackers go. Now it seems to be abandoned, though several people have created personal forks on gitorious. Ditz has no native support for any particular VCS, but it does have a plugin system which has been used to integrate with Git. Of interest, especially to Emacs users, is that Ditz has an accompanying Emacs major mode. Ditz has a particular focus on grouping issues into releases.

There appears to be a local web UI, "Sheila", but I am unsure of its usability state.

Last commit/release: Sept 2011
Language/Runtime: Ruby
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: Yes
Public Web UI: Read-only
GUI: ditz-commander
File format: YAML
VCSes: Agnostic
Custom Fields: No
Comments: Yes
Attachments: No
BugID: Hash
Multiuser: With plugin
Bug Dependencies: No


Fossil


Fossil is not just a distributed bug tracker, but an entire development forge in a box. It includes a DVCS, wiki, bug tracker and web server. One might call it a distributed forge. Since Fossil stores the tickets in its distributed database there is a custom merging algorithm, apparently mostly newest-wins, which avoids any manual merging of bug files.

Last commit/release: Apr 2013
Language/Runtime: C
Bug storage: Out-of-tree
Dog food: Yes
CLI: Yes
Local Web UI: Yes
Public Web UI: Read-write
GUI: No
File format: Database
VCSes: Fossil
Custom Fields: Yes
Comments: Yes
Attachments: Yes
BugID: UUID
Multiuser: Yes, but no ownership
Bug Dependencies: No


UPDATE 2013-06-03: As C2H5OH mentioned in the comments it is possible to add custom fields.


git-case


git-case is a bare bones proof of concept distributed bug tracker built in the style of the git porcelain. The website claims that some operations are sluggish, but no further details are given.

Last commit/release: Oct 2010
Language/Runtime: Bash
Bug storage: Off-branch
Dog food: No
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: Plain text
VCSes: Git
Custom Fields: Yes
Comments: Yes
Attachments: Yes
BugID: Hash
Multiuser: Yes
Bug Dependencies: No


git-issues


git-issues is a mostly defunct tracker built on top of Git in a similar manner to git-case.

Last commit/release: June 2012
Language/Runtime: Python
Bug storage: Off-branch
Dog food: No
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: XML
VCSes: Git
Custom Fields: No
Comments: Yes
Attachments: Yes
BugID: Hash
Multiuser: Yes
Bug Dependencies: No


gitissius


gitissius started off as a fork of git-issues, but then diverged significantly.

Last commit/release: Dec 2011
Language/Runtime: Python
Bug storage: Off-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: JSON
VCSes: Git
Custom Fields: No
Comments: Yes
Attachments: No
BugID: Hash
Multiuser: Yes
Bug Dependencies: No


gitli


gitli is really more of a single user TODO list than a fully fledged distributed bug tracker. All the issues are contained within a single file. With that setup, linear BugIDs and no comments, it isn't really suitable for anything except the simplest of projects.

Last commit/release: March 2011
Language/Runtime: Python
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: Custom text
VCSes: Git
Custom Fields: No
Comments: No
Attachments: No
BugID: Linear
Multiuser: No
Bug Dependencies: No


gitstick


gitstick is apparently based upon Ticgit and seems to be a young distributed bug tracker. Unfortunately I wasn't able to determine much about how this project operates from inspection. It may be more appropriate to call this a local web UI for Ticgit than a standalone distributed bug tracker.

Last commit/release: Jan 2013
Language/Runtime: Scala
Bug storage: Off-branch
Dog food: No
CLI: No
Local Web UI: Yes
Public Web UI: No
GUI: No
File format: ?
VCSes: Git
Custom Fields: ?
Comments: ?
Attachments: ?
BugID: ?
Multiuser: Yes
Bug Dependencies: No


klog


klog appears to be greatly in flux at this time, so it is difficult to say much which is likely to remain accurate in a year. There appear to be a great many features planned, but only the most basic are implemented. According to the bug database a complete rework of the way the bug database is stored is planned.

Last commit/release: Mar 2013
Language/Runtime: Javascript
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: Prototype?
Public Web UI: No?
GUI: Mac OSX
File format: Key-value-text
VCSes: Agnostic
Custom Fields: No
Comments: No
Attachments: No
BugID: Hash
Multiuser: No
Bug Dependencies: No


Mercurial Bugtracker Extension


Mercurial Bugtracker Extension uses an unusual layout for bugs. There is one directory for open bugs and another for closed bugs. Such a layout may cause issues when there are concurrent modifications such as one person modifying an open bug and another closing it.

Last commit/release: May 2012
Language/Runtime: Python / Mercurial plugin
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: INI
VCSes: Mercurial
Custom Fields: No
Comments: No
Attachments: No
BugID: Hash
Multiuser: Yes
Bug Dependencies: No


milli


milli seems to have disappeared during the lengthy research period so no further information is available.

Last commit/release: ?
Language/Runtime: ?
Bug storage: ?
Dog food: ?
CLI: ?
Local Web UI: ?
Public Web UI: ?
GUI: ?
File format: ?
VCSes: Agnostic
Custom Fields: ?
Comments: ?
Attachments: ?
BugID: ?
Multiuser: ?
Bug Dependencies: ?


Nitpick


Disclosure: Nitpick is written by the author.

Nitpick is a relatively young distributed bug tracker with most of the significant features discussed in this article. One notable feature of Nitpick not present in other distributed bug trackers is the ability to combine multiple Nitpick databases, via the foreign project feature, into a single view. This allows viewing bugs both across several projects and across several branches in a single instance of Nitpick.

Last commit/release: Apr 2013
Language/Runtime: Python
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: Yes
Public Web UI: Read-only
GUI: No
File format: Simple markup
VCSes: git, hg, svn
Custom Fields: No
Comments: Nested
Attachments: Yes
BugID: Hash
Multiuser: Yes
Bug Dependencies: Yes


pitz


pitz started off as a reimplementation of Ditz.

Last commit/release: Aug 2012
Language/Runtime: Python
Bug storage: On-branch
Dog food: Yes
CLI: Yes
Local Web UI: No
Public Web UI: No
GUI: No
File format: YAML
VCSes: Agnostic
Custom Fields: No
Comments: Yes
Attachments: Yes
BugID: UUID
Multiuser: Yes?
Bug Dependencies: No


scm-bug


scm-bug is not a standalone distributed bug tracker. Instead it ties source code to an existing bug tracker. It might be possible to use this with a locally installed tracker in a distributed fashion.

Last commit/release: Feb 2011
Language/Runtime: Perl
Bug storage: Out-of-tree
Dog food: ?
CLI: ?
Local Web UI: ?
Public Web UI: ?
GUI: ?
File format: ?
VCSes: svn, git, cvs, hg
Custom Fields: ?
Comments: ?
Attachments: ?
BugID: ?
Multiuser: ?
Bug Dependencies: ?


Simple Defects


Simple Defects is more than just a distributed bug tracker; it is also capable of synchronizing bidirectionally with several centralized bug trackers. SD uses a distributed database instead of storing the bug repository alongside the source code in a VCS. As such the VCS support it does have is mostly limited to adding commands to the VCS command itself. Since SD is capable of synchronizing bugs in multiple ways it might be possible to use it as an intermediate step between a central project bug tracker and a locally installed centralized bug tracker for developer use.

Last commit/release: Sept 2012
Language/Runtime: Perl
Bug storage: Out-of-tree
Dog food: Yes
CLI: Yes
Local Web UI: Yes
Public Web UI: No
GUI: No
File format: Database
VCSes: git, darcs and other
Custom Fields: ?
Comments: Yes
Attachments: Yes
BugID: Linear
Multiuser: Yes
Bug Dependencies: ?


Stick


Stick is another of those distributed bug trackers which seems to have fallen off the Internet. I'm unable to retrieve the source to get much concrete information, but the website makes it seem as if Stick was mostly in the idea conception phase with little actual working functionality.

Last commit/release: ?
Language/Runtime: ?
Bug storage: ?
Dog food: ?
CLI: ?
Local Web UI: ?
Public Web UI: ?
GUI: ?
File format: ?
VCSes: Git
Custom Fields: ?
Comments: ?
Attachments: ?
BugID: Hash
Multiuser: No
Bug Dependencies: ?


ticgit-ng


ticgit-ng does dogfood itself, but that isn't evident from the main repository. I had to search through some forks on Github to find the bug branch. Ticgit-ng uses an interesting approach to managing the data by having a single 'file' per field. Thus there is a file for the state and one for each comment. Not evident in the feature summary is that Ticgit-ng supports tagging issues, though it isn't clear if it supports multiple tags or only one.

Last commit/release: Oct 2012
Language/Runtime: Ruby
Bug storage: Off-branch
Dog food: Yes
CLI: Yes
Local Web UI: Yes
Public Web UI: No
GUI: No
File format: Plain text
VCSes: git
Custom Fields: No
Comments: Yes
Attachments: Yes
BugID: Hash
Multiuser: Yes
Bug Dependencies: No


Veracity


Veracity is another distributed forge in that it is not only a distributed bug tracker, but also a wiki and source control. Again the bugs are stored in a distributed database which has some special logic and interfaces to help merging along.

Last commit/release: Mar 2013
Language/Runtime: C
Bug storage: Out-of-tree
Dog food: Yes
CLI: No
Local Web UI: Yes
Public Web UI: No?
GUI: No
File format: Database
VCSes: Veracity
Custom Fields: No?
Comments: Yes
Attachments: Yes
BugID: Linear
Multiuser: Yes
Bug Dependencies: No


Summary Table


For reasons of space, only what I consider to be fully fledged distributed bug trackers worth considering for a project of more than one developer appear in this summary table. All the same information is available for every tracker I evaluated in their respective sections. The primary determinants of suitability are multi-user support (the ability to assign bugs to users and to determine who made any particular comment or bug report), a sufficiently recent commit or release, and what appeared to be at least one mature interface for developer use. The range of project complexity these trackers are suitable for varies, but since a small two-man project should use a bug tracker just as a large project should, I list trackers of multiple complexities.

To save space I have skipped the fields for the GUI (since none of the selected trackers have one), support for custom fields (which only Fossil appears to support) and multiuser support (since that was one of the requirements and all have some such support). It is important to note that all but Fossil have full multi-user support out of the box. Fossil lacks the ability to assign a bug to a particular person for resolution, but that can be added as a set of custom fields.

Comparison part 1

Software                       | Last Commit / Release | Language | Bug Storage | Dogfood | CLI | Local Web UI | Public Web UI
b                              | Oct 2012              | Python   | On-branch   | Yes     | Yes | No           | Read only
Bugs Everywhere                | Mar 2013              | Python   | On-branch   | Yes     | Yes | Yes          | Read only
Fossil                         | Apr 2013              | C        | Out-of-tree | Yes     | Yes | Yes          | Read-write
git-issues                     | June 2012             | Python   | Off-branch  | No      | Yes | No           | No
Mercurial Bugtracker Extension | May 2012              | Python   | On-branch   | Yes     | Yes | No           | No
Nitpick                        | Apr 2013              | Python   | On-branch   | Yes     | Yes | Yes          | Read only
Simple Defects                 | Sept 2012             | Perl     | Out-of-tree | Yes     | Yes | Yes          | No
ticgit-ng                      | Oct 2012              | Ruby     | Off-branch  | Yes     | Yes | Yes          | No
Veracity                       | Mar 2013              | C        | Out-of-tree | Yes     | No  | Yes          | No?

Comparison part 2

Software                       | File format    | VCSes             | Comments | Attachments | BugID  | Bug Dependencies
b                              | Sectioned text | hg                | Yes      | No          | Hash   | No
Bugs Everywhere                | JSON           | Many              | Yes      | Yes         | UUID   | Yes
Fossil                         | Database       | Fossil            | Yes      | Yes         | UUID   | No
git-issues                     | XML            | git               | Yes      | Yes         | Hash   | No
Mercurial Bugtracker Extension | INI            | hg                | No       | No          | Hash   | No
Nitpick                        | Simple markup  | svn, git, hg      | Nested   | Yes         | Hash   | Yes
Simple Defects                 | Database       | git, darcs, other | Yes      | Yes         | Linear | ?
ticgit-ng                      | Plain text     | git               | Yes      | Yes         | Hash   | No
Veracity                       | Database       | Veracity          | Yes      | Yes         | Linear | No


Bug Handling Strategies


By analogy with DVCSes, distributed bug tracking provides some new capabilities, makes some older techniques easier and makes some traditional centralized bug tracking methods all but impossible. In this section I'll try to cover the most common of these cases and some ways to work within the limits, both tighter and looser, which distributed bug tracking software provides as it exists today.


Distributed Use Cases


Most of the talk around distributed bug tracking is about replacing a centralized bug tracker completely. This is so for the obvious reason that most developers don't want more than one bug tracker per project. There are, however, some interesting alternative uses which are not in direct conflict with a centralized tracker. One such use is the aggregation of multiple trackers into a single one. Consider the case of a developer who works on several different projects. If these projects don't share one bug tracker then the developer must regularly check these separate trackers. Some of the distributed bug trackers described above support bidirectional communication with other bug trackers, centralized or not. As such the developer could configure a local distributed bug tracker to give an overview of several trackers.

An alternative is a hybrid centralized-decentralized setup, similar to how DVCSes are used in practice in many cases. If the project or organization has a single centralized tracker, a developer could set up a distributed tracker as a mirror, full or partial, of the centralized tracker for personal consumption and modification when they are disconnected or operating over a poor network link. Whenever it is convenient they would then trigger a bidirectional synchronization. Thus they gain all the advantages of distributed bug tracking without many of the disadvantages. This model is similar to individual developers who use git as an interface to a Perforce or Subversion repository.

Yet another use case, again depending on aggregation, is to combine various bug trackers for a single project. As an example, the bug tracker of an open source package and the bugs filed against that package in the trackers of all the major Linux distributions could be combined for a more complete view of the issues users are having with the software.

The various ways a distributed bug tracker could be used are not yet fully explored, so these are just a few examples of how one could be integrated into a workflow.


Non-Developer Members


One common concern with large projects moving to distributed bug tracking is how to integrate QA and project managers. The predominant view among existing large projects is that QA and project managers, for the most part, should neither need nor have access to the VCS. Bringing this stance to distributed bug tracking would imply that QA would have no way to directly interact with the bug tracker in other than a readonly fashion. The solution to this predicament is to give those QA and project managers read-write access to the VCS.

There are a few reasons such a move is resisted. Many of them are obsolete or misinformed notions based upon limitations of old VCSes or poor bug trackers. The first, however, is an entirely valid argument: requiring all the QA and project managers to become experts in the VCS of choice is overly onerous. At more than one place I have seen the local VCS expert set up special wrappers to perform only the limited set of functionality a particular artist or QA person needed to get their job done while hiding all the other complexity. In a similar vein any good distributed bug tracker will provide a sufficiently simple interface to the VCS for the bug operations that minimal training should be necessary.

A second common claim, especially among open source projects, is that the VCS is for source code only and everything else should be kept separate. While it is possible to have a parallel VCS repository, or some other arrangement as will be discussed below, modern VCSes are not simply source control systems, but generalized version control systems. Though some VCSes handle them less well than others, many large projects have good success storing large assets or even build chain tools in the VCS alongside the project. As such there is no reason not to also store the bug database and all the input from the QA people as well. The VCS can be viewed as the project state and not just the project output.

A final possible complaint is that the QA people, not being VCS experts, may make disastrous mistakes relating to merging, reviving stale commits or just inadvertently editing other parts of the project. While this is true when no protections are put in place, most VCSes provide the ability to either restrict different users to different portions of the checkout tree or otherwise have a knowledgeable person double check their changes before accepting them into the main development repository.

Integrating QA, project managers and other non-developers such that they can make full use of the distributed bug tracker is not a difficult matter; it merely requires that sufficient training and protections be put in place. These less technical people will likely, however, not be pleased with a purely command line interface to the tracker. Partly this is because their use cases tend not to deal with one bug at a time but instead involve traversing, reading, commenting on and modifying several in quick succession. Partly this aversion comes from less familiarity with CLI tools compared to the average developer. For this reason any distributed bug tracker used should also provide a good read-write local web or graphical interface for these less technical users.

Care must also be taken when it comes to helping them know which branch of development to find the appropriate bugs in. For support-type staff this is as easy as having them choose the version to file the bug against first and then choosing the correct bug repository version based upon that. For QA users it is more a matter of ensuring that the builds they test come along with the bug database. This is most easily accomplished with a fully automated build system which can produce QA testable builds on demand. With such a system QA is given a source tree which is trivially built into a product to test. QA then need merely use that branch to handle any bugs for that build.


Public Users


As previously discussed a major outstanding issue, especially for open source projects, is how to provide the public with a useful interface into the project bug tracker. Few read-write web interfaces suitable for public consumption have been created for distributed bug trackers, though no insurmountable obstacle appears to block the way in most cases. Currently I can only recommend one approach to solving this issue: have a readonly public web interface which is updated frequently, and handle any bug modification or creation on the part of users through a support mailing list.

This will not be as convenient for developers and users as a public read-write bug tracker, but it is likely to provide better results for both parties. The users, instead of creating a new bug which is likely never to be answered, if it is ever read by a developer, will interact directly with a developer or other support person for the project. This allows the developer to not only determine if this is an existing bug, a step users often never perform correctly, but also ensure that all the necessary information has been acquired before the user leaves and is never heard from again. Many bugs in open source bug trackers are full of incomplete information while the reporting user is nowhere to be found. The user is also better off as there may be a solution to the particular issue they are facing which they will be told about immediately instead of waiting for a WorksForMe resolution of their bug, if that ever comes.

As previously mentioned the developer will be able to extract all the necessary information from the user more easily because the discussion will happen immediately instead of days or weeks later when somebody gets around to viewing the newest bugs on the tracker. Developers also benefit by having fewer duplicate bugs with slightly different information cluttering up the tracker because they'll deduplicate as they go along. An additional advantage is a greater likelihood of a user actually reporting the issue. If the recommended way for a user to report a problem is to a mailing list they are reasonably likely to do so. They are less likely to create yet another account for yet another bug tracker which they will never use again such that they can file one bug which will almost certainly not receive a response.

One critical and necessary aspect of this to remember is that responses to the users on the list must be timely and efficient. It is this requirement of good communication which brings about the benefits for both the developers and users. In fact, this method is how many commercial companies operate: the customers interact directly with support staff who navigate and fill in the necessary information in the bug tracker. The second critical aspect is that the public readonly web view is updated frequently. An up to date place for users to track the state of their bug, look up resolutions to similar issues or be pointed at when they are having a known issue is invaluable and saves developer time. Users prefer to get the answers they want without having to bother the developers and they like to see progress.


Multibranch Overview


One particular issue with distributed bug tracking is that there is not necessarily a single complete view of the bugs at any time. Instead different branches may have different bugs in different and conflicting states. For example a development branch may have fixed a bug, but since that branch hasn't yet merged to the trunk the trunk doesn't have that bug marked as fixed. A further example is a release branch having a bug created against it on the complaint of a user, but that bug not yet having made its way via merging to the trunk, so the bug exists nowhere else in the VCS. These are all examples of the power of distributed bug tracking when it is used to have bug states follow the code flow within the project.

However, sometimes it is useful to have a complete, or more complete, view of the bugs. As an example, a project manager may want to know what bugs have been fixed for the coming release, even if not all of those changes have made it to the release branch yet. Perhaps the code must move from a development branch through a QA branch before arriving in the release branch. It is still important to know that a bug otherwise marked as open in the release branch is actually closed in some other branch. Another situation is that of a developer who works in a development branch. When viewing the bug database this developer would like to see not just the bug information as it appears in his branch, but also as it appears in the trunk, in case some new bug or comment relevant to his current branch appears.

This cross-branch bug database merging is an important feature to ensure a wider view of the state of the bug database when such a view is useful. At the time of this writing only one distributed bug tracker which I am aware of, Nitpick, supports such a facility directly. Indirectly it is possible to script a CLI interface to merge the bug query results across many VCS checkouts. Of course any distributed bug tracker which uses off-branch or out-of-tree storage will have neither this disadvantage nor the advantage of having differing branch versions of the bug database.
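
A sketch of what such a script might look like; the bugtool command, its list --open flag and the checkout paths are all stand-ins for whatever the chosen tracker and project actually provide:

  import subprocess

  CHECKOUTS = {
      "trunk": "/src/project-trunk",
      "release-1.2": "/src/project-release-1.2",
  }

  def open_bugs(path):
      # Run the tracker's CLI inside one checkout and collect its bug IDs.
      out = subprocess.run(["bugtool", "list", "--open"], cwd=path,
                           capture_output=True, text=True, check=True).stdout
      return set(line.strip() for line in out.splitlines() if line.strip())

  results = {name: open_bugs(path) for name, path in CHECKOUTS.items()}
  for name, bugs in results.items():
      elsewhere = set().union(*(b for n, b in results.items() if n != name))
      print("%s: %d open, %d open only here" % (name, len(bugs), len(bugs - elsewhere)))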


Using On-Branch as Off-Branch


Distributed bug tracking holds many possible advantages and uses which cannot be filled by traditional centralized bug trackers. But it may be that not all of this power is desired for a particular project. In many cases it is possible to configure the distributed bug tracker, with some scripting effort, to work in a less powerful mode.

For example, while trackers designed for off-branch storage will likely have an easier interface for it, it is possible to use an on-branch bug tracker as an off-branch tracker. Simply create a separate branch or checkout for the bug repository and write some scripts which direct the bug tracker to use that branch or checkout instead of putting the bugs beside the source code. This relatively simple step will produce an off-branch bug tracker with the bugs stored in the VCS.
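
A minimal sketch of such a wrapper, assuming Git and a dedicated checkout of a bugs branch; the bugtool command is a placeholder for whichever on-branch tracker is being wrapped:

  import subprocess
  import sys

  BUGS_CHECKOUT = "/src/project-bugs"  # separate checkout tracking the bugs branch

  def run(cmd):
      subprocess.run(cmd, cwd=BUGS_CHECKOUT, check=True)

  run(["git", "pull", "--ff-only"])   # stay current with the shared bugs branch
  run(["bugtool"] + sys.argv[1:])     # forward e.g. "add", "comment", "close"

  status = subprocess.run(["git", "status", "--porcelain"], cwd=BUGS_CHECKOUT,
                          capture_output=True, text=True, check=True).stdout
  if status.strip():                  # commit and push only if something changed
      run(["git", "add", "-A"])
      run(["git", "commit", "-m", "bugs: %s" % " ".join(sys.argv[1:])])
      run(["git", "push"])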

Similarly a setup even closer to a centralized bug tracker, but with the bugs stored entirely in the VCS, could be created by presenting a single web interface to all the developers and having its changes committed directly to the VCS. There is even the possibility of using these simpler setups for some users of the repository while allowing the full distributed capabilities for others, perhaps the remote workers.

In much the same way an on-branch or off-branch bug tracker can be turned into an out-of-tree tracker simply by having a separate VCS repository which contains only the bug repository.


Surviving A Manual Bug Process


In the beginning bug trackers started as simple TODO lists, perhaps with some notes. From there the massive spectrum of bug tracking tools and processes evolved. At the extreme end there are very complicated bug processes and tools. While these sorts of processes can be translated to distributed bug tracking they are likely to be cumbersome and disappointing. Instead distributed bug tracking is better suited to simpler processes and fewer fields. Because the entire bug database is available locally to a developer it is simple to run complex queries as local scripts. Any datum which isn't extremely common is likely better off as a formatted comment on a bug instead of a custom field with complex automation behind it.
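
As an example of the kind of ad hoc local query this enables (the one-JSON-file-per-bug layout under bugs/ is an assumption for illustration, not the format of any particular tracker):

  import glob
  import json

  # Find open, high priority bugs straight from the working copy.
  found = []
  for path in glob.glob("bugs/*.json"):
      with open(path) as f:
          bug = json.load(f)
      if bug.get("state") == "open" and bug.get("priority") == "high":
          found.append((bug.get("id", path), bug.get("summary", "")))

  for bug_id, summary in sorted(found):
      print(bug_id, summary)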

Along these lines a simpler bug process in general is recommended. A small number of bug states, priorities and the like is recommended. If the bug process is simple enough then no automation will be necessary because the developer will have only one obvious choice and will be able to arrive directly at the state they desire. This is contrary to a common setup where the developer must first mark a bug as assigned before marking it resolved before marking it closed. For many simple bugs this is overkill and the developer will spend more time navigating the process than fixing the bug. In such cases the developer wants to be able to skip straight to closing the bug.

With distributed bug tracking it pays to have a clear and simple process. Only a handful of states are needed and most information is better suited to being in a comment than a custom field.


Where to Enter Bugs


One issue which comes up when discussing the abstract theory of distributed bug tracking is that it would be ideal if a bug could be associated with the original commit which introduced the error, across all the various branches and clones. This is nice in theory but also impossible in theory: there may be no single commit which introduced the error. While it is possible to associate a bug with any commits which do introduce an issue, that is really just a mapping from bug ID to commit ID. It is possible to construct a bug tracker in this way, and it would be able to cut across branches using this mapping.

Lacking such a system the next best that can be done is to ensure that bugs are entered where the fix would be placed. As an example consider a project with a recent release and a trunk where development continued. During the release stabilization process a branch would have been created for the release while any remaining major bugs were fixed there. All those changes would then be merged back into trunk at a later time. Any bugs found by QA against that stabilizing release should be raised in the release branch. Then when the fixes propagate so too will the bug information.

In a similar way bugs in maintenance releases should be raised in the branch for that release to eventually be merged into the trunk. The fix itself may or may not still be applicable, but since some changes will be, the bug database changes should be merged up as well.

Now all this depends on the particular branching and versioning strategy the project uses. If the project doesn't have maintenance releases or doesn't move changes around like that then a different location to report or modify bugs will be appropriate.


Other Distributed Tracking Options


As previously stated distributed bug tracking really started as simple TODO files. As such there are ways of tracking bugs which don't require fully fledged bug tracking software, distributed or otherwise. Most of these are severely limited in several ways, but a project may not hit these limits.

Beyond the simplest TODO lists are things such as Emacs org-mode. This can work well, but may fall apart when multiple developers are involved, and it makes providing a public read only view into the bug database cumbersome.

Another alternative, which doesn't suffer from this last limitation, is to use wiki software to track bugs. There exist VCS based wikis, such as ikiwiki. These will tend to be usable with a standard text editor while still providing an easy way to render the content for users on the web. Using a wiki like this will, however, tend to make it difficult for users to find issues that may apply to them except by reading all the existing issues.


Is This Worth Doing At All?


It may seem odd to have a section which deals with the question of the value of distributed bug tracking so late in the article, but without understanding distributed bug tracking as it currently exists it is quite difficult to make a reasoned judgement on the matter. There are opinions in both directions. Proponents of distributed bug tracking focus on the isolation capabilities, offline support and bug branching, while opponents focus on the collaborative aspects of bug tracking which distributed bug tracking slows down. Both groups have valid points and the strength of any particular point truly depends on the project in question.

To start, consider a project where all work is done on feature or bugfix branches, there is a thorough review process and all real discussion happens on a mailing list with relevant messages copied into the bug tracker manually for reference. In this case distributed bug tracking would seem to have few downsides. All the discussion happens in a broadcast medium, the email list, so every developer can easily get a sense of the current state and the latest debugging information. Since the bugs are fixed on branches, and a thorough review process may cause a large span of time to pass between the bug being fixed and that fix being merged into the trunk, the ability of bug states to follow the fixes is very useful, especially if there is some tool support to aggregate bug states across multiple branches.

A different project might instead do the vast majority of its development on the mainline with all discussion occurring via the bug tracker. Here distributed bug tracking seems to have little detriment. Certainly the full capabilities are not being used, but perhaps offline support is sufficiently useful. As soon as the developer synchronizes their local copy they will have all the new discussion. This does require that the developer synchronize frequently and regularly, which may be a change in workflow. This is less of a burden with DVCSes, but can be an issue for projects or developers which prefer checking in single, complete units of work as a single commit, something like committing a few days of work at a single time instead of as several commits over those days. In these situations frequently merging in changes from the trunk may be onerous.

There is then a third case of a project with the branching structure of the first case, but the communication system of the second. That is, all work is done on branches and the vast majority of the communication occurs exclusively via the bug tracker. This situation can cause some difficulty when using a distributed bug tracker. The time for a new bug comment to be pushed up and then pulled down by another developer can be quite significant. There are simple solutions however, the simplest of which is to give the canonical VCS repository a hook which emails out new bug comments and state changes when those changes are pushed to it. Having a centralized bug tracker email out such information is very common already and shouldn't be an issue.
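
A rough sketch of such a hook for Git, assuming on-branch storage under a bugs/ directory; the addresses are placeholders and a real hook would want more error handling:

  #!/usr/bin/env python3
  # post-receive: mail out any bug files touched by a push.
  import smtplib
  import subprocess
  import sys
  from email.message import EmailMessage

  def git(*args):
      return subprocess.run(["git"] + list(args), capture_output=True,
                            text=True, check=True).stdout

  for line in sys.stdin:            # each line is "old new ref"
      old, new, ref = line.split()
      if set(old) == {"0"}:         # brand new branch; skipped for brevity
          continue
      changed = git("diff", "--name-only", old, new, "--", "bugs/").split()
      if not changed:
          continue
      msg = EmailMessage()
      msg["Subject"] = "Bug updates on %s (%d files)" % (ref, len(changed))
      msg["From"] = "bugs@example.org"
      msg["To"] = "dev-list@example.org"
      msg.set_content(git("diff", old, new, "--", "bugs/"))
      with smtplib.SMTP("localhost") as smtp:
          smtp.send_message(msg)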

It remains to be seen which side of the argument will win out, but it currently appears that distributed bug tracking fills a real need, especially when coupled with a DVCS, and few of its downsides lack relatively simple technical fixes. For the time being at least, distributed bug tracking appears to be a useful tool worth using.


Future Thoughts


Distributed bug tracking is a young concept, even considering the age of other concepts in the various fields of computing. As such there are many areas which have not been thought through, and it is unlikely that any of the current generation of distributed bug trackers has all the features and functionality which will one day be considered essential. Here are a few ideas and issues which still need resolution with respect to distributed bug tracking.


Tracking Changes


One of the first advantages which comes up when considering distributed bug tracking is the ability for the closing of a bug to follow the change which fixes it as that change propagates through branches and releases. This works fine with on-branch storage, but there are arguments against on-branch storage related to bug visibility and the length of time it takes for a comment on a bug to propagate to the branch a developer is watching. Off-branch bug storage, however, gives up the ability for bug state to follow code fixes. One possible solution is, with VCS support, to store the IDs of the changes which fix the bug and then query the VCS to see if those changes exist on the branch in question when showing the bug state.
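
A sketch of that query for Git, assuming the tracker has recorded the fixing commit in the bug record (the fix_commit field is hypothetical); git merge-base --is-ancestor exits with status 0 when the first commit is reachable from the second:

  import subprocess

  def fix_present_on(fix_commit, branch):
      result = subprocess.run(["git", "merge-base", "--is-ancestor",
                               fix_commit, branch])
      return result.returncode == 0

  bug = {"id": "a8d82ff764188578", "state": "fixed", "fix_commit": "1f3c9ab"}
  for branch in ["master", "release-1.2"]:
      shown = bug["state"] if fix_present_on(bug["fix_commit"], branch) else "open"
      print("%s: %s" % (branch, shown))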


VCS Storage Limitations


It is appealing to store the bugs directly in the VCS of the project, either beside the code or in their own branch. For a moderate number of bugs and comments this is not an issue. However, the file layouts and formats which are the easiest for the VCSes to merge are not especially efficient and may cause issues when scaling. There does not yet seem to be enough experience to determine how this scaling should be dealt with or even how much of an issue it will become. Should old issues be archived into a more efficient format? Should old issues be deleted from the HEAD of the VCS, relying on the VCS history to retrieve them? Is there some other option which is superior to those mentioned?


References


I haven't explicitly called out references in the text above, but here are some websites of interest from which I procured some of my information, where it is not original thought or extracted from the software compared above.

  1. http://tychoish.com/rhizome/supporting-distributed-bug-tracking/

  2. http://bytbox.net/blog/2012/10/thoughts-on-distributed-bug-tracking.html

  3. http://nullprogram.com/blog/2009/02/14/

  4. http://www.ericsink.com/entries/dbts_fossil.html

  5. http://erlangish.blogspot.ca/2007/06/distributed-bug-tracking-again.html

  6. http://heapkeeper-heap.github.io/hh/thread_298.html#post-summary-hh-1076

  7. http://dist-bugs.branchable.com

  8. http://evan-tech.livejournal.com/248736.html

  9. http://blog.tplus1.com/index.php/2008/08/01/toward-a-horrifying-new-workflow-system/

  10. http://esr.ibiblio.org/?p=3940

  11. http://blog.ssokolow.com/archives/2011/08/25/topic-glimpse-distributed-issue-tracking/

  12. http://www.raizlabs.com/blog/2007/06/20/linux-distributed-bug-tracker/

  13. http://urchin.earth.li/~twic/Distributed_Bugtrackers.html


Energy Sources

In a previous entry I discussed how the entire world economy can be reduced to moving energy around. That may or may not have made sense to you, dear reader. In an effort to make it clearer I'll try to attack it from a different angle. In this post I'll be discussing where energy comes from and some of the ways we can release it for our use.

In the modern world there are four root sources of energy. All other energy comes either directly from these sources or is a stored form of these sources from the past. The four sources are:

  1. Fusion

  2. Supernovae

  3. Gravity

  4. The Sun

Fusion is the easiest to understand as a root source of energy. Fusion releases energy, simply, by turning two atoms into a single heavier atom. If the mass math (recall E=mc²) works out such that there is a bit of extra energy left over then the fusion of those two types of atoms produces energy. If the math works out that additional energy is required then the fusion of those two atom types requires energy. The most talked about form of energy-releasing fusion is between two hydrogen atoms. The energy from fusion comes from physics and the basic laws of the universe.
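
As a rough worked example of that mass math, using approximate textbook values rather than anything from this post: fusing four hydrogen atoms into one helium-4 atom leaves a small mass deficit, and that deficit is the energy released.

  4 \times m(^1\mathrm{H}) \approx 4 \times 1.00783\,\mathrm{u} = 4.03132\,\mathrm{u}
  m(^4\mathrm{He}) \approx 4.00260\,\mathrm{u}
  \Delta m \approx 0.0287\,\mathrm{u}
  E = \Delta m \, c^2 \approx 0.0287 \times 931.5\,\mathrm{MeV} \approx 26.7\,\mathrm{MeV}

That is, roughly 0.7% of the original mass comes out as energy.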

Related to fusion is the energy from supernovae. A supernova is when a large star explodes in a fantastic fashion. As I mentioned above there are two types of fusion: one which releases energy and one which requires energy. I believe that the tipping point is around iron. You can create iron atoms while releasing energy, but to fuse iron atoms together requires that energy be put in. Most of the time this energy-sink form of fusion occurs is during a supernova, when there is a lot of energy bouncing around. This creates all the heavier elements, such as uranium. Supernovae create uranium, and it's well known that uranium can be used in nuclear reactors to create energy via nuclear fission. Nuclear fission is simply splitting atoms into two or more lighter atoms to release energy. Thus nuclear fission is releasing the energy stored by a supernova.

It's important for the next sections to note the difference between the source of the energy and how the energy is released. The energy originated in the supernova and nuclear reactors merely release this stored energy. The nuclear fission isn't creating energy, merely releasing it from a stored form.

Have you ever ridden a bicycle down a steep, tall hill? That is one method to release the energy of gravity. In the bicycle example you first had to travel up the hill, which is really just storing energy using gravity. True gravitational energy has its origins in the formation of the planets. Planets form out of a dust cloud which is brought together by the force of gravity. When this happens the particles of dust gain velocity and thus energy. When chunks of space debris collide this gravity-originating velocity is turned into heat. As the Earth formed many millions of such chunks collided, causing everything to get very hot. That's why the Earth's surface was molten when the Earth first formed. As the surface of the Earth cooled and solidified the core stayed molten, partially because it takes a long time for this heat to dissipate but also because if you push on things very hard they tend to warm up too. Since there is so much Earth mass the pressure at the core is very high and the compression creates some heat as well.

This gravitationally created energy is accessible in two forms. The first is quite similar to the bicycle mentioned previously. Imagine a mountain with a boulder on top of it. That boulder has been there a very long time and was originally put there by plate tectonics, which is the motion of the Earth's crust over the molten core. If the core wasn't molten and didn't release energy through the crust the crust wouldn't move and no mountains would have been created. Since the energy created by gravity created the mountains the boulder contains stored gravitational energy. We can release it by rolling the boulder down the mountain.

The other way this ancient gravitational energy is available to us is by geothermal power, as used in many places including Iceland. Geothermal power is a class of techniques where water or something similar is pushed deep into the Earth (or less deep in the case of Iceland) to be heated by the heat within the Earth. This hot water or steam is then retrieved via pipes and used to produce electricity or heat buildings.

Finally we have the most common source of energy entering the Earth, the Sun. Humankind uses the energy provided by the Sun in many many ways. The simplest is using the light to warm yourself on a sunny day or reading by sunlight on just about any day. These are the most direct and immediate uses. Some less direct, but still immediate uses include using solar panels to produce electricity to be used immediately.

The Sun provides the vast majority of all the energy used in the world. Most of it is used to keep the outdoors from freezing solid. If there was no Sun then the surface of the Earth would be quite cold, nearly as cold as space. Most of the energy which heats the surface is then reflected back into space, never to be seen again. The energy which isn't immediately reflected into space warms things such as rocks, lakes and clouds or is used by plants to grow. It is the growth of these plants which powers the ecosystem, including the food we eat.

Food and firewood are examples of releasing solar energy some time after, and perhaps some place distant from, where the energy was first captured. Usually food is consumed within two years of being captured and wood within a couple of decades, depending on the age of the tree. Another form of stored solar energy which is somewhat removed in time and space from its capture is hydroelectric power. When the Sun heats water some evaporates, travels through the atmosphere and eventually turns into snow or rain. Some of this will happen in areas of higher elevation, and the water will then move downhill towards the ocean in rivers. This energy can be captured using various generation systems such as dams.

Further removed in time and space from the original capture from the Sun are fossil fuels. Fossil fuels are believed to be decomposed and transformed remains of plant and animal life from millions of years ago. As such fossil fuels such as coal and oil represent stored solar energy which was captured over hundreds of thousands of years.

These four sources of energy are the only significant sources available to us on Earth. Every other form of energy is either too weak to be useful, such as light from other stars, or is merely a stored form of energy produced from the four sources mentioned. With some thought all forms of energy can be traced back to one of these sources.

Infinitely Evil

There is a set of topics for which, in general, no rational discussion can take place. For these topics the majority of the population can only accept the most extreme solutions and will consistently either misunderstand any point to the contrary or consider you a monster unworthy of association. Both of these situations usually end up with lots of flaming, but very little useful discussion.

I believe it's an unfortunate gap in education which causes this. It takes a significant amount of effort and training to be able to consider positions which trigger a strong gut reaction. Instead of calm contemplation and questioning many people will respond with vitriol and hate.

I term these topics infinitely evil because a rational response to anything depends on how negative it is weighed against how likely it is, something like an expected value calculation. Under this framework the only kind of thing which can result in an inconsiderate stonewall of disagreement is something infinitely negative. That which is infinitely negative is infinitely evil. There are actually two related sets of infinitely evil topics: those universal within an ideology and those universal within a society.

Universal within an ideology are things like equality of women, abortion, taxes, welfare, drug use and organized crime, to name a few in the common North American ideologies. An interesting thing about these ideological evils is that they work both ways. If there is an ideology where some solution to an issue is accepted then there is often another ideology where that solution is infinitely evil. There are rare cases where one ideology is dead set on one solution, but most other ideologies are pretty neutral on the whole topic.

Any infinite evil which isn't ideologically restricted is by definition universal within the society. These are still not truly universal, but may appear so if you never leave the same social region, such as North America or the Western World. Included here are things such as sex crimes, hate crimes, crimes against 'children', murder, Nazis, etc.

Topics such as these are one part of the reason the Internet is such an echo chamber. If every discussion leads to a hot flame war in one forum then you'll likely move on until you find a spot with less disagreeable participants. Unfortunately this tends to be a group of people who agree on these infinitely evil topics. They may discuss them, but only to nod agreement at each other and discuss the finer points of a solution which, to some, is morally repugnant.

The only thing which can be done in these situations is to not discuss the infinitely evil topic directly. Instead some more nuanced situation should be discussed. One will just have to hope that the other participants will see the parallels to the more extreme situation on their own.

One final disclaimer: the above is obviously a generalization. One that I've found to be true more often than not in my experience, but a generalization nonetheless.

High Resolution 3D at 48 FPS

Yesterday I watched The Hobbit in high framerate 3D. I did this mostly to see what 48FPS movies looked like. The movie itself was not bad.

The first thing that happened is that my view that one shouldn't watch live action 3D films was reaffirmed. The lack of whole-scene focus is just too hard on the brain. The Hobbit wasn't as bad as others I've seen, but I still left the theatre with a mild headache. I've also learnt to ignore the background entirely, which I think detracts from the film since I feel that I spent most of it looking at dwarf noses.

The Hobbit was also filmed in some high resolution process and then downscaled. This certainly increased the fidelity of the images and made them seem much more real. Unfortunately everything seemed somewhat too bright and the additional resolution highlighted lighting discontinuities. As such, scenes shot within one set with natural lighting, such as daytime outdoor shots or indoor shots with sufficient ambient lighting, look quite good. Composite shots or shots where highlight lighting was necessary work out less well because the lighting differences are obvious and unnatural.

I think it is this high resolution process with highlight lighting which is the primary cause of complaints that The Hobbit looks too much like a soap opera. A secondary cause would be the lighting brightness required for the various light sapping processes (3D, high resolution, high framerate) interacting poorly with the inverse square law of light intensity.

The real reason I went to see this movie as I did was to watch the high framerate, just in case it ends up being a flop and fading away like 3D has done in the past. I think 48 FPS filming is a mistake.

The biggest issue is that many actions end up looking either jerky or as if they are happening on fast forward, even if the actual movement happens at a normal pace. I noticed this more during slow motions, like moving a book, than fast motions, like fighting. This rushed feeling is quite jarring and I saw it repeatedly through the entire course of the movie. Not even two hours of practice made the effect go away. This is a severe problem that breaks the immersion in the movie world. I have a couple of half-baked theories why the higher FPS may cause this problem, but the solutions always seem to boil down to needing an even higher FPS, thus escaping an uncanny valley, or modifying the frame display in some way to aid persistence of vision. It's also possible that simply being more deliberate in motion fixes the issue. Perhaps there was some frame cutting for pacing which caused this issue.

The second issue I saw with the high framerate occurred during action scenes. Specifically, everything moved too fast and was difficult to follow. What's unfortunate is that this is an obvious result of more realistic projection. In the real world action happens in the blink of an eye. It takes a wealth of experience in a particular sport to be able to follow the action when you are unable to see most of it. Additionally, in the real world when your mental picture of some portion of the world is out of date you can quickly check on it, not so in movies where you can only see what the director shows you. In movies the limitations of 24 FPS and the motion blur which comes with it help the audience understand what's happening, even if they've taken their focus off the sword of the hero to examine his facial expression. The blurry history shows what's important and what's happened over the last quarter of a second.

Both of these issues were most pronounced during camera movement. When the camera was placed at a human level and moving at a human pace (that is, slow as a snail) then it worked out more or less fine. However, as soon as the camera was moved in a way which is dramatic, but humanly impossible, the jerking became severe.

I would not recommend watching The Hobbit in 3D High FPS, I think it'd be a better movie in 2D at 24FPS. This is not to say that I think it impossible, just that there are some restrictions on what the film can do. As long as a movie was shot only from a steady cam at human level with a walking camera man, avoided fast action scenes and had all the actors move slightly deliberately I think it could work out fine. It makes you wonder what the point of a drama shot in such an expensive process would be though.

Diggin Up Old Stuff

In channel today the discussion turned to the Twelve Days of Van Epp. This reminded me that I had the Thirteen Days of Crackdown in the archive of my SFU account. In looking at the contents of that archive I found a couple of things worth uploading.

I present, for your enjoyment, Thirteen Days of Crackdown and Meet SFU CS.

Tyranny of the Forums

The web has a pretty rich and varied set of ways to deliver information to a person. There are standard web pages, images, videos, interactive images, audio, tables and wikis to name a few. There is also the web forum.

Web forums are pretty much the default tool of many communities to disperse information. Often this works by an admin pinning a post at the top of one of the topic areas; the appropriate senior member then starts the thread with some information, and all the information they forgot is filled in as responses to posts from normal members.

They are the default for this sort of thing and I hate it. If you are a casual reader of the community or you arrive well after the fact then you have to read through an often lengthy series of posts in order to extract the relevant information. Information that would otherwise take mere seconds to read off a plain, up to date webpage.

And this is the best case, when the community deems the information important enough to pin a thread. Perhaps that isn't the case because you want some old information which is no longer pinned, or some information provided to the community by a non-senior member. Then not only do you have to read through some thread spread over thirteen pages to find the information you want along with its clarifications, you also have to sort through tens or hundreds or thousands of threads to find the one which has your information in the first place. God help you if you want to find any amendments in other threads.

Perhaps the worst example of this I have ever seen is at the xdadevelopers forum. There is an amazing amount of knowledge strewn about that forum. Unfortunately it's strewn about and nearly impossible to find. When you try searching for something you usually only get a series of twisty ten page threads full of comments linking to other thirty page threads. It truly is hell on the Web.

Amazing amounts of pain and suffering could be avoided simply by summarizing these important knowledge threads in a wiki. Help end the tyranny of the forums, summarize the information in important threads in a community wiki.

It is all Energy

The entire global economy is simply a complex web of energy flows. One in which money is a placeholder and token for energy.

This fact is not immediately obvious to most, but is easily demonstrated. First consider a worker. This worker has a job which pays him, what he does isn't important. The worker spends his money on several things: food, gadgets, travel and housing being the most important classes.

First consider the simplest case of food bought directly from a farmer. Money goes to the farmer in exchange for some amount of food. Food is the energy which keeps the worker living. So what does this farmer spend the money he receives on? He spends it on the same things as the worker. Thus some of it goes to buying the energy which keeps the farmer living and the rest of it goes to gadgets, travel and housing.

Looking at gadgets, which for the purpose of this discussion I'll define as everything from pots and pans to tools to the latest electronic toy, we see a more complex situation. Certainly part of the cost of a gadget goes to paying the people who make them. Some goes to pay for the energy to run the machines these employees use, more to paying the rent on the factory. Still more goes to paying the trucker to ship the finished items around. There is some profit also in the equation, which goes to the owners of the enterprise.

Finally, the remainder goes to pay for the materials of which the final item is constructed. This could be advanced parts like computer chips or simple materials like raw sand. In the end it doesn't matter because if an item isn't being manufactured for the consumer then it is being manufactured to make things, in the end, for the consumer. If we follow the chain all the way down we arrive at the resource industries: mining, oil and gas.

The miner runs a machine manufactured in the aforementioned web of industry. This machine runs on diesel fuel to move materials out of the ground and into the manufacturing web. The miner and factory workers all spend their money on the same things as the worker and the farmer: food, gadgets, travel and housing.

The world is a spread out place and nothing seems to naturally occur where you want it. Food doesn't grow in the fridge, the worker's job isn't in his living room and finished products don't leave the factory at the worker's doorstep. Things have to move and somebody has to pay for it. Similar to gadgets there are people who work in the travel industry directly and indirectly. Directly by driving buses and ships, indirectly by manufacturing these vehicles or by working in the resource industry to extract the materials and fuel for these machines. Our worker likely has a car they use to get to work and the grocery store. They paid for the manufacturing of this vehicle. They continue to pay for the fuel to run it and the parts to keep it running. Everything else they eventually end up with, or want to get rid of, travels at least some of the distance. Food reaches the grocery store by truck, gadgets leave the house in the garbage by truck and the parts for everything manufactured are moved all over the globe by ships running on diesel.

Of course, all the people who work in or for the transportation industry spend their money on the same things as every other labourer so far: food, gadgets, travel and housing.

That leaves the most complex expense of all, housing. Housing is complex because there are many factors which go into it, some of which have long time spans to consider. We'll start diving in with the simple continuous costs of housing. These include things such as heating the house and keeping the house in good condition. The former is obviously primarily an energy cost. Everybody knows how much heating oil or natural gas or electric heat costs them every year. Keeping a house maintained is also an ongoing expense. Lawns and bushes need to be maintained, wood repainted and concrete resealed. Floors wear out and eventually need replacing. All of these things require parts, labour and tools to accomplish. All the labour requires that somebody perform it, either the owner or a tradesman they hire for the job. The parts and tools are all manufactured as with the gadgets described earlier. All the money for all those things is spent, in the end, on food, gadgets, travel and housing.

The more complicated aspects of housing have to do with the intrinsic value of the property. This starts with the physical space of the property. This is a location in space and is relative to all the other nearby properties. These could be other houses or apartment blocks, they could also be more productive properties like schools and stores and factories. In any case there is a finite amount of property and ownership implies some level of control over that property.

From this stems property taxes, which are nominally to pay for services performed by the city employees with city owned machinery. Then there is the geographic location of the property. Properties closer to places people want to travel between tend to be more valuable because they save travel costs. If it became possible to instantly teleport between any two points on the earth many would opt to work in the city and live in the middle of nowhere. Currently this has significant costs because travel is not instant and it is not nearly free. Properties which are nicer also tend to be considered more valuable. This is usually due to one of several possible reasons. The major ones being newer construction, implying a greater time until expensive maintenance is required; nicer materials, implying a greater manufacturing cost; additional services such as gated communities, implying ongoing costs to maintain these extra services.

Now this is all true for residential, commercial and industrial properties. Agricultural properties are slightly different. Agricultural properties have the majority of their intrinsic value in their ability to produce food. This is a combination of the quality of the soil (poor soil requires manufactured fertilizer, even if manufactured by a horse), the availability of growing water (dry regions need water to be pumped in for a cost) and availability of heat and sunlight (you can grow without natural light or heat, but it must be provided artificially at great expense). While only the last is pure energy, the former two are significant energy/money savings if the naturally occurring resources are sufficient. These resources, in turn, represent stored solar energy either from the distant past as with topsoil or underground reservoirs, or the more recent past as with water flowing down a river to the ocean.

All these things require energy to perform, either directly in the way of running machinery, indirectly by paying a human to do some work or double indirectly via the manufacturing web.

In the end all money is eventually spent by humans. Some humans are able to control more money than they can ever reasonably spend themselves, others have barely enough to survive. Nonetheless, once all accounts are settled every person will spend every penny they manage to spend on food, gadgets, travel and housing.

While money is able to move around this web continually without being destroyed, energy cannot do this. The laws of thermodynamics require that at least some of the energy must be irrevocably consumed at each step of the iteration. Thus the system always needs new energy pumped into it in the form of sunlight and fuel. With sufficient energy it is possible to recycle any material as if it were new. Thus energy is the only true non-recyclable resource. As such, eventually, all the money in the system is passed around to buy energy. Either directly in the form of gasoline or indirectly in the form of purified metal which is the result of energy to mine the minerals and energy to move the minerals and energy to purify the minerals.

In the end, the entire economy is about transforming matter and energy into differently shaped matter and unusable waste energy.

Infinite Impersonal Internet

It's quite possible you've heard about the "Infinite News Stand". In short this is the issue faced by news providers on the Internet where there is effectively infinite free content. The issue here is obviously how to get paid. You can paywall everything, but then nobody shows up to read your stuff. You could provide free samples, but if everybody did that there is effectively infinite free content available, so why would people pay for more of your stuff?

This is just one aspect of what I, in this post, am going to call the Infinite Impersonal Internet Issue. I'll shorten that to I4 to save my wrists. I4 arises because of three facts about the world.

The first is that millions upon millions of people use the Internet for anything and everything you can imagine. In fact, it is impossible for any single person to type even one message to every user of the Internet online at any one time. There are just too many people, there are infinite monkeys.

The second is that, for any particular discussion topic, many people hold the same viewpoint. This can best be seen in popular comment sites such as Reddit. If you don't look carefully at the names you'll see that the threads progress in a logical manner as if a cohesive discussion is taking place between a small number of participants. Normally nothing of the sort is true. Instead you have entire cohesive threads where a single participant will often post only once. This is but one obvious and interesting example of the substitutability of people on the Internet. It is difficult, in a practical sense, to differentiate posters except by their stance within the discussion taking place. While it is usually a safe bet that posts from the same account are the same person, it is generally unknowable if different accounts are different people. Accounts tend to be cheap on the Internet, easily produced and easily discarded. It's hard to tell the monkeys apart.

The third fact is simply that most interactions on the Internet are transitory and impersonal. While it is entirely possible to make good friends on the Internet and even to create deep communities that is not the default interaction. The default interaction is to read an article by some named but otherwise anonymous author, further read comments with random nicknames and then perhaps provide a comment yourself. Much like walking down a busy urban street you encounter a setting, take in the response of people you are unlikely to ever see again and perhaps react in your own way. Only rarely will anybody in that crowd distinguish themselves from the faceless mass.

As noted above I4 is not the only form of interaction on the Internet, but it is definitely the most common. It takes real work to construct a community and many interactions between two people before they will see the other's face. It happens every day, but not every time.

Now what are the consequences of I4? There are many, the most obvious of which are the general ailments of the Internet: trolls, spammers, hate mongers of various sorts and echo chambers. Trolls don't care if you don't like them and ignore them forevermore; they can always troll somebody else or start a new account. Spammers don't even care if you personally respond as long as some minuscule fraction of people do. Hate mongers don't care about you; they already have their own echo chamber which tells them they are right and you are wrong.

This leaves echo chambers. Echo chambers are not unique to the Internet, but are made much larger and more numerous by it. Given an infinite amount of content produced by an infinite number of monkeys which are hard to differentiate, how does one choose which places to return to? Obviously they go where they liked the content the best. This just so happens to be where other like minded people tend to end up. You now have an echo. The chamber is simply a result of the fact that Internet people aren't particularly differentiable. Spend enough time listening to the echo, hearing it from nearly all sides, and one starts to believe that everybody on the Internet believes that. The logic is simple and fuzzy: Internet people A through Z think that way and they cover the spectrum of Internet people, therefore all Internet people believe so. Echo chambers are unavoidable, but they prevent alienation. They provide a necessary shared culture.

Not all the consequences of I4 are negative. The dual of trolls and spammers is the newbie: the person who doesn't quite know where they fit or quite how to act. I4 allows this person an infinite number of tries under an infinite number of guises to find their place and make their contribution. The dual of hate mongers are constructive communities which combine their intelligence and effort to build great monuments. The dual of echo chambers is exposure to new ideas. These great strengths of the Internet cannot be ignored just as they cannot be separated from their duals.

The next time you see a troll ignore that identity and move on; they may return under a different nickname or move onto some other target and that's alright. If you come upon a den of hate mongers enlighten them; they'll likely view you as a troll or spammer, but that's fine, Internet nicknames are free. The next time you are disliked or hated on the Internet don't sweat it; there are a lot of people on the Internet and they are all kinda blurry at this distance.

Sone Freesite Comments

One day, one comment and there's already been a bunch of discussion about my freesite commenting system. I was going to discuss it eventually anyways to add some documentation about how to achieve complex freesite designs, but I'll do it now instead of later to describe some mistakes and alternative options to using Sone threads for freesites.

First how to use Sone to embed a comment thread inside a freesite. The first step is to create a Sone post. You could do this manually, but I prefer to have it automated. I did this using the Sone FCP interface. The exact script I use can be found here. The basic theory is to use the FCP interface to create a thread and then extract the Sone post ID from the result. This post ID will be used to create the URLs we need later.

The next step is to embed the Sone thread into your freesite. You can either do this with a simple link or with an iframe. If you use the iframe method be warned that most browsers can't support too many iframes pointing at Sone off one page. If you have a separate page for each post or a small number of posts on each page then you'll be fine. If you are like me and have many posts on one page, then you'll have to only embed iframes for the first handful and use links for the rest.

In any case the URL you want to link to is "/Sone/viewPost.html?post=POSTID#post-POSTID", where POSTID is the post ID the script I linked above returns. The reason for the anchor is to have the iframe or link point directly to the start of the post instead of the top of the page. This makes things look nicer than having to scroll down through all the header stuff to get to the comments or comment reply button.

For the moment Sone has one severe limitation when it comes to being used as an embedded comment system. That is that it doesn't embed cleanly. There is currently no way to turn off all the unnecessary formatting, such as the Sone information and post boxes. It also doesn't reflow nicely to narrow sizes. This means that you must make the iframe a significant portion of the width of your page. I currently use approximately 800 pixels wide by 400 pixels tall. It seems to work well enough for the moment.

Perhaps after I finish tweaking my freesite I'll look into adding the necessary support to Sone to embed it nicely.

One important consideration with using Sone as a comment system is which ID to use. There are two different axes, each with two different options. The first axis is whether the Sone ID key is the same as the freesite key or not. The second is whether the Sone ID is the same as your primary Sone ID or not.

If the freesite key is the same as the Sone ID then when you create a comment thread and post a link back to the article you will see a starred link, such as in this post. If the Sone ID is different than the freesite key then you'll get a cut down version of the link where the key has been removed. I personally prefer the former since it provides some information about what the filename of the page is, but either is usable.

If you use the same Sone ID as your primary Sone ID then whenever you upload a new post anybody who follows you will know about it. Another way you can do it is to have an identity solely to post the comment thread starters. The latter has the advantage of not requiring people to see these new posts, they can simply unfollow the comment ID. I'm not sure at this point if people who aren't following the comment ID will necessarily see when there are new comments posted.

If you are starting a freesite from scratch then it doesn't seem to matter whether you use your primary Sone ID or a new comment Sone ID. If you are starting with a large number of existing articles, say if you are moving an existing website into Freenet, then creating a new ID is strongly recommended. This prevents a large number of people suddenly seeing many dozens of new posts with no content.

Currently I use my primary Sone ID to create the comment threads, but I may change that in the future if I verify that people will see responses to Sones they don't follow. Luckily switching between which Sone ID you use for comment threads is easy for new articles. Moving old comment threads is impossible and spammy, but there doesn't seem to be any pressing reason to do so.

Update!

As Sone stands on November 27th 2012, you won't see replies to a post of a Sone you aren't following. This means that even if you did use a separate identity for the comment thread starters each person would have to make a choice between not seeing the article announcements and seeing new comments without having to manually visit the comment Sone or the freesite. Neither is a perfect solution.

Update 2!

As of November 29th 2012, USK bookmarks work on a per key basis. That is, you can only really bookmark a USK and not a page within that USK. This poses problems for large freesites. If the site is large it will be difficult for a person to determine what subpage has changed when it is updated. To work around this a site will either have to have a chronological listing of updated content on its index or provide subsite update announcements some other way.

Hello Freenet

I've decided to add some content to Freenet. I won't be adding any confidential material or secret material. Instead I'm going to insert a modified version of my website.

I won't be uploading my full public gallery, it is much too large and not terribly interesting to most people. I will be uploading Freenet only posts in addition to posts I make available on the web. This is mostly because there are certain topics I don't want spread wide and far for political reasons. I see no need to keep them off Freenet though.

I can be found on WoT and can be reached on Sone.

There may be a few awkward moments as I adjust this version of my website for Freenet use, but please bear with me.

CSS

Now I don't claim to be a web developer. I don't do it for money, but I do touch web development every so often. Every time I do I am amazed at the poor design of CSS. I don't really understand how CSS was standardized while being so bad.

The idea of CSS is a good one: separate the content from the presentation. It makes code in general cleaner and easier to modify in a consistent manner. This makes sense. But the implementation is halfway useless and falls well short of this goal. It isn't possible to set up the simplest semantic div hierarchy and call it a day. CSS is non-orthogonal and makes many basic presentations needlessly cumbersome to create. It is well known to be impossible to create any moderately complex layout with the ideal semantic hierarchy.

Instead there are hacks in use everywhere, with divs inside divs inside spans with Javascript thrown on top to get layouts which are functional and visually pleasing. Even then, some things are nearly impossible without manual pixel layout.

You would think that by CSS version three they would have figured out what they've done wrong and fixed it. Oh how wrong you are. Instead of fixing the major core issues of non-orthogonality, insane limitations and special cases they merely add more half-baked features. At the rate that real issues get fixed I figure that CSS version 10 will be reasonably robust and flexible.

I have no idea what kind of drugs the CSS designers are on, but it must be good.

Time for New Key

This is a short announcement that I have produced a new GPG key. You can find the new key here and my transition statement here. My new key ID is CBA7B85A.

Professional C

The Internet is full of simple language tutorials and bookstores are full of books proclaiming to teach you language X in Y time period. This is not one of them. Instead this is intended to provide a shortcut to all the learning which happens as one works on a large, well engineered and well written codebase. I expect that you already know how to program in at least one programming language and further that you understand the theory of indirection and pointers. I'll cover the basics, but I'll do it at breakneck speeds. You will want to look elsewhere for formal definitions and corner cases.

Basic Syntax and main()


Let's look at a simple example to start with.

#include <stdio.h>
#include <stdlib.h>

#define TYPE_OF_WORLD ("beautiful")
#define DEFAULT_NUM_ITERATIONS (10)

int iterations_remaining(int max_iterations);

/*
 * Utility to say hello
 *
 * Usage: hello [iterations]
 */
int main(int argn, char **args)
{
        int num_iterations = DEFAULT_NUM_ITERATIONS;

        if (argn == 2) {
                num_iterations = atoi(args[1]);
        }

        for(;;) {
                if (iterations_remaining(num_iterations) > 0)
                        printf("Hello " TYPE_OF_WORLD " World!\n");
                else
                        break;
        }

        return 0;
}

int iterations_remaining(int max_iterations)
{
        static int iterations;

        iterations++;

        return max_iterations - iterations;
}

This simple example shows us much of the basic syntax. It shows us how to include library header files:

#include <stdio.h>
#include <stdlib.h>

How to define constant strings and integers with symbolic names:

#define TYPE_OF_WORLD ("beautiful")
#define DEFAULT_NUM_ITERATIONS (10)

How to declare a function without defining the body. This lets us define the function elsewhere later:

int iterations_remaining(int max_iterations);

How to write a block comment:

/*
 * Utility to say hello
 *
 * Usage: hello [iterations]
 */

The proper function signature for main():

int main(int argn, char **args)

How to define a variable and optionally set it with an initial value:

int num_iterations = DEFAULT_NUM_ITERATIONS;

The syntax for an if block, how to check for equality, how to set a variable to a value, how to call a function and how to access array elements. Additionally it shows one aspect of the equivalence of pointers and arrays:

if (argn == 2) {
        num_iterations = atoi(args[1]);
}

The syntax for an infinite for loop, how to do simple if-else, how to print text using the standard library, the fact that string constants automatically combine in C and how to break out of a loop early:

for(;;) {
        if (iterations_remaining(num_iterations) > 0)
                printf("Hello " TYPE_OF_WORLD " World!\n");
        else
                break;
}

How to return a value from a function:

return 0;

How to define a function, even one which has been previously declared:

int iterations_remaining(int max_iterations)
{

How to declare a static variable restricted to a function scope along with the fact that static variables are automatically set to zero on program load:

static int iterations;

The post-increment operator and how to perform basic arithmetic:

iterations++;

return max_iterations - iterations;

Overall that's quite a busy example. The syntax so far has all been more or less plain except a few points: including header files, the function signature of main() and the static keyword. I'll leave an explanation of static until later as it has several uses. C uses header files to contain type definitions, constant definitions, function declarations and sometimes variable definitions. There are two similar forms:

#include <library.h>
#include "project.h"

The first is used for header files from libraries, the latter for header files from the current project. The distinction is a bit murky since some projects are so large as to have library like elements within themselves.
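
As an aside, a project header should protect itself against being included twice. A minimal sketch of what a hypothetical project.h might contain, include guard and all (the names here are made up):

#ifndef PROJECT_H
#define PROJECT_H

#include <stdint.h>

#define PROJECT_MAX_WIDGETS (16)

struct widget;                          /* forward declaration only */

int project_widget_count(void);

#endif /* PROJECT_H */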

The type signature of main() is more or less fixed in stone. Since C is used in so many different domains it isn't actually a fixed rule, but if you are writing an application it is a safe signature to assume. The signature has three parts. The first is the return type, int or integer. This is a signed type and even though an int is usually 32 bits long on most modern computers you can only safely return values from 0-255. The exact range is OS specific.

The second element is "int argn". This is the number of arguments in the following string array. This value will never be less than one. The third element, the array of strings, is the argument list. Element zero (args[0]) is the string containing the name the program was executed under. The following elements up to element argn - 1 are the program arguments as strings.

Though the return type of main() is an int, you can't depend on more than 7 bits being reliably and portably returned. Thus the safest range is 0-127. You can use the values 128-255, but sometimes it will be interpreted as a signed char and sometimes unsigned and so can be confusing.
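
If you would rather sidestep the question entirely, stdlib.h provides EXIT_SUCCESS and EXIT_FAILURE, which the host environment is guaranteed to interpret sensibly:

#include <stdlib.h>

int main(int argn, char **args)
{
        if (argn < 2)
                return EXIT_FAILURE;    /* report failure portably */

        return EXIT_SUCCESS;
}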


Control Structures


C has all the basic control structures you need: do-while loops, while loops, for loops, switch statements, if, if-else and goto. Their syntax is pretty simple, but it is important to follow some style conventions to make it readable:

int func(void)
{
        int i;
        int j;

        for (;;) {
                /* Infinite loop */
        }

        for (i = 0, j = 6; i < j; i += 2, j += 1) {
                if (i == 8)
                        break;

                if (j == 10) {
                        continue;
                } else if (j + i == 15) {
                        printf("foo\n");
                }

                j += 1;
        }

        i = 10;
        do {
                i = i / 2;
        } while (i > 0);

        while (i < 15) {
                i++;
        }

        return i;
}

All this is fairly straightforward. goto will be covered later when we get to handling errors. A while loop can be used to implement an infinite loop, but I prefer using an infinite for loop instead because it is more visually distinctive. Don't be afraid to use goto for jumping out of, into or above loops. Properly placed goto/labels can make code much more obvious and simpler.
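
As a sketch of the sort of thing I mean, here is the classic case of bailing out of a nested loop, which a plain break cannot do in one step:

int find(int grid[10][10], int wanted)
{
        int i;
        int j;

        for (i = 0; i < 10; i++) {
                for (j = 0; j < 10; j++) {
                        if (grid[i][j] == wanted)
                                goto found;     /* one jump exits both loops */
                }
        }

        return -1;

found:
        return i * 10 + j;
}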

When it comes to generic loop counters do use i, j or k unless you have a good reason not to. These loop iterators are well known and short. If you are doing the equivalent of a for loop using a macro foreach-type construct the loop iterator should be named for what you are iterating over, e.g.:

list_foreach(fruit, fruits) {
        /* Do something */
}

It is good practice to write and use macro'd foreach constructs with non-array data structures since they make things clearer.
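
As a sketch, assuming an intrusive singly linked list where each node carries a next pointer (your project's real list code will differ), such a macro might look like:

struct node {
        struct node *next;
        int value;
};

#define list_foreach(pos, head) \
        for ((pos) = (head); (pos) != NULL; (pos) = (pos)->next)

int sum(struct node *fruits)
{
        struct node *fruit;
        int total = 0;

        list_foreach(fruit, fruits)
                total += fruit->value;

        return total;
}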


Datatypes and Abstraction


The key in C when it comes to data structures is that the size of the data structure must be correct. If the size is correct then you'll be able to interpret those bytes in many ways. Some of them will even be correct. Let us start with basic data types:

#include <stdint.h>

int tmp;
char character;
char fixed_length_string[10];
char *pointer_to_string;
int8_t a;
uint8_t unsigned_a;
int16_t b;
uint16_t unsigned_b;
int32_t c;
uint32_t unsigned_c;
int64_t d;
uint64_t unsigned_d;
float f;
double g;

Use the int type for a general integer work type when you don't need large values. There are various other modifiers which you may see in older code, such as unsigned short int. Don't use them. Instead use the fixed sized integers from stdint.h or the equivalent. stdint.h also has a number of more specialized types for things like the most efficient signed type of at least a given number of bits. These can be advantageous on embedded CPUs where different integer widths can have widely differing performance and code size.
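
For reference, the specialized types look like this. I only reach for them on embedded targets where the width versus speed trade-off actually matters:

#include <stdint.h>

int_fast16_t loop_counter;      /* fastest signed type of at least 16 bits */
int_least8_t tiny_value;        /* smallest signed type of at least 8 bits */
uint_fast32_t hash_accumulator; /* fastest unsigned type of at least 32 bits */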

A programming language with just basic types isn't that useful, so C provides structs and unions. A struct is simply an interpretation of an object (objects in C terminology tend to mean just a chunk of memory). As previously mentioned you can interpret chunks of memory in different ways, this is critical to the method of implementing polymorphic object-oriented programming in C. A union is an explicit extension of this interpretation concept. A union is simply a compiler aid to interpreting the same object in different ways. This is most useful when you have dependent data which differ in types and you want to reduce the amount of memory (or bandwidth) an object takes. It can also be useful to reduce the amount of stack space a function requires, which matters in some environments such as kernel work.

struct config_msg {
        uint8_t type;
        union {
                uint64_t transaction_number;
                char username[32];
        } u;
};

int func(int value)
{
        union {
                struct success_msg success;
                struct failure_msg failure;
        } u;
        int len;

        if (value == SUCCESS) {
                u.success.header.type = SUCCESS_MSG;
                u.success.value = value;
                len = sizeof(u.success);
        } else if (value == FAILURE) {
                u.failure.header.type = FAILURE_MSG;
                u.failure.value = value;
                u.failure.source = SOURCE_ANALOG;
                len = sizeof(u.failure);
        }

        return send(&u, len);
}

There are a few things to note about the above example which are quite important. The first is that, unless otherwise configured, the compiler is free to add padding inside a structure to ensure that the elements are aligned. In this case that means that there will be padding (usually 3 or 7 bytes depending on architecture) after the type element. You can disable this on a per structure definition basis, but how to do so is compiler specific.
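
For example, GCC and Clang accept a packed attribute (MSVC spells it #pragma pack); this is an extension, not standard C, and reading misaligned members can be slower, or even fault, on some architectures:

#include <stdint.h>

struct config_msg_packed {
        uint8_t type;
        union {
                uint64_t transaction_number;
                char username[32];
        } u;
} __attribute__((packed));

/*
 * On a typical 64 bit machine sizeof(struct config_msg) is 40
 * (1 byte of type, 7 bytes of padding, 32 bytes of union) while
 * sizeof(struct config_msg_packed) is 33.
 */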

Secondly, note that the members of a union do not have to be the same size. Any memory which is past the end of a member will simply not be accessed.

Finally, a note about anonymous unions and structs or unions without type names. An anonymous union is a union which has neither a type name nor a variable name. Use anonymous unions sparingly as they can confuse what memory belongs to which variables. Also be aware that standard C only permits anonymous unions as members of another struct or union; at function scope, as in the example above, the union needs a variable name, which is why it is named u there.

As shown above structs and unions don't have to have type names. This can be useful when creating a struct or union inside a typed struct to help organize the data. It can also be useful to construct a custom typed struct for limited use within a function. However, you should always have a variable name in that case, as in the case with the union u above.
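
For reference, the standard anonymous form, as a member of a struct, looks like this. The field names here are made up; the point is that the members are accessed as if they belonged directly to the enclosing struct:

#include <stdint.h>

struct sample {
        uint8_t type;
        union {                         /* anonymous: no tag, no member name */
                uint32_t seq;
                uint8_t raw[4];
        };
};

void demo(void)
{
        struct sample s;

        s.seq = 7;                      /* no intermediate name required */
}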

Something you may see is typedef'ing structs, like so:

typedef struct index_type_t_ {
        uint8_t type;
        uint32_t index;
} index_type_t;

I don't recommend this in general usage because it makes the code harder to read while only saving a small number of typed characters. With a programmer's editor those last few characters likely aren't typed to begin with. The C language comes with a separate struct namespace and it would be foolish not to use it. Do note that there are use cases for typedef and even for typedef'ing structs when it comes to library interfaces and basic types. For example, it is entirely reasonable to typedef a uint16_t to be called index_t since that makes it easy in the future to change the size of index_t without having to check all the existing code line by line. Every use of index_t will still have to be checked, however, for correct intermediate uses. Some code may store the value in an int for example, which would make changing to a uint64_t incorrect on most platforms. It is good practice to use the correct type names whenever possible to avoid this problem; if you are manipulating an index_t you should use only index_t typed variables, even if it currently is the same as uint16_t.
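
A minimal sketch of that index_t case, with made up struct and function names:

#include <stdint.h>

typedef uint16_t index_t;

struct record {
        index_t parent;
        index_t self;
};

index_t record_next_index(const struct record *record)
{
        index_t next = record->self;    /* keep intermediates in index_t */

        next++;

        return next;
}

If index_t later grows to a wider type, only the typedef changes and the code above keeps working.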


static


Before we move onto object oriented programming in C and interface design we need to understand the static keyword. The static keyword has two major uses: defining a variable with a known and unchanging memory location and restricting the symbol visibility of a function to a single file.

The first use, static variables, is easy to understand abstractly, but as you'd need to understand linking (which I won't cover here) we won't go any deeper than that. There are two defining properties of static variables:

  1. Every static variable is initialized to zero at program initialization with no further work from the programmer.

  2. Every time the static variable is used from a function it is the same variable which contains whatever value it did the last time it was used.

Static functions, as previously mentioned, restrict the visibility of that function to the current compilation unit (more or less the current file). This should be used whenever feasible to restrict the visibility of functions to the minimum necessary. Minimal visibility is advantageous as it makes it easier to refactor functions, delete unused functions and use short, descriptive names for functions. It also reduces redundancy because forward function declarations for static functions are unnecessary in most cases. Consider the name differences between these two functions:

int libname_component_toggle_fob(struct fob *fob)
{
        ...
}

int toggle_fob(struct fob *fob)
{
        ...
}

void foo(struct list *fobs)
{
        struct fob *fob;

        list_foreach(fob, fobs) {
                libname_component_toggle_fob(fob);
        }

        list_foreach(fob, fobs) {
                toggle_fob(fob);
        }
}

As you can see the shorter name is better as it makes the code easier to read. We can also be sure that toggle_fob() will never be used by name outside this file, which makes it easy to check every call site when we decide to rename it or modify what it does.


Object Oriented C


Many people believe that C is only a procedural language and further that you need special language support to implement object oriented programming and especially polymorphism. This is not correct. In fact, object oriented programming in C is not only possible, but it is not noticeably more complex than normal C programming and is more flexible than some object oriented languages.

The key to using the object oriented style in C is that you have to pass the object you are operating on explicitly to the methods. Usually this is done as the first argument. Past that there are two ways to implement object oriented programming in C. If you don't want polymorphism it's as simple as you would hope:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct fob {
        char name[10];
        uint16_t num;
};

void fob_init(struct fob *fob)
{
        memset(fob, 0, sizeof(*fob));
}

void fob_setname(struct fob *fob, char *newname)
{
        strncpy(fob->name, newname, sizeof(fob->name) - 1);
        fob->name[sizeof(fob->name) - 1] = '\0';
}

int main(int argn, char **args)
{
        struct fob *fob;

        fob = malloc(sizeof(*fob));
        if (!fob)
                return 1;

        fob_init(fob);
        fob_setname(fob, "myname");

        free(fob);

        return 0;
}

Note the use of "sizeof(*fob)" and "sizeof(fob->name)". You can use sizeof on things like these and it is recommended to do so instead of the traditional tactic of defining the size and then using that, or doing "sizeof(struct fob)". Using the former forms when possible prevents errors when the type of fob is changed since the correct size is determined by the compiler. It also reduces the number of definitions which have to be exported and named.

As you can see, static object oriented programming in C isn't difficult. This form works equally well with objects allocated on the heap or the stack.

If you want to support polymorphism then a bit more work needs to be done, but that's to be expected given the greater capabilities. This is a simple example of one way to implement polymorphism. With more work and some helper macros it is possible, though not always advisable, to achieve usages which require less typing.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct fob_ops;

struct fob {
        const struct fob_ops *ops;

        char name[10];
        int32_t value;
};

struct fob_ops {
        void (*setname)(struct fob *fob, char *newname);
        int (*getval)(struct fob *fob);
};

static void fob_setname(struct fob *fob, char *newname) { ... }
static int fob_getval(struct fob *fob) { ... }

static const struct fob_ops fob_ops = {
        .setname = fob_setname,
        .getval = fob_getval,
};

void fob_init(struct fob *fob)
{
        memset(fob, 0, sizeof(*fob));

        fob->ops = &fob_ops;
}

struct gizmo_ops;

struct gizmo {
        struct fob fob;

        int8_t type;
};

struct gizmo_ops {
        void (*setname)(struct gizmo *gizmo, char *newname);
        int (*getval)(struct gizmo *gizmo);
        int (*settype)(struct gizmo *gizmo, int8_t type);
};

static int gizmo_getval(struct gizmo *gizmo) { ... }
static int gizmo_settype(struct gizmo *gizmo, int8_t type) { ... }

static const struct gizmo_ops gizmo_ops = {
        .setname = fob_setname, /* Will produce a type warning */
        .getval = gizmo_getval,
        .settype = gizmo_settype,
};

void gizmo_init(struct gizmo *gizmo)
{
        memset(gizmo, 0, sizeof(*gizmo));

        gizmo->fob.ops = (const struct fob_ops *) &gizmo_ops;
}

int main(int argn, char **args)
{
        struct gizmo *gizmo;
        struct fob *fob;

        gizmo = malloc(sizeof(*gizmo));
        if (!gizmo)
                return 1;

        gizmo_init(gizmo);

        /* settype is gizmo specific, so view the ops as gizmo_ops */
        ((const struct gizmo_ops *) gizmo->fob.ops)->settype(gizmo, 6);
        gizmo->fob.ops->setname(&gizmo->fob, "mygizmo");

        fob = (struct fob *) gizmo; /* Treat the gizmo as its superclass */

        fob->ops->getval(fob);

        free(gizmo);

        return 0;
}

Note that whenever there is a list of items, the last item should, where possible, also have the trailing separator. That way if an item is added after the current last item in the list the code diff will be cleaner.

This example shows that while there is a bit more manual setup for polymorphism in C, the usage isn't too onerous in practice. The reason all of this works is because we take different views of the memory object. The critical thing to remember with this is that the order of elements must be maintained. That is, in the ops structures the superclass's operations must come first and in the same order. Similarly the superclass structure itself must be included first in the subclasses.

You'll note the awkward usage "gizmo->fob.ops->getval(...)". This can easily be avoided if we have two definitions for each class's struct: the internal one, which is presented pretty much as above, and an externally visible one which contains the ops element first followed by a filler array which brings the external struct up to the same total size as the real internal one. This is often not necessary as the majority of uses in a well designed system only use each (OO) object as if it were the superclass.
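
A sketch of that arrangement, restructuring the gizmo above. GIZMO_PRIV_SIZE is a made up, hand maintained constant, which is exactly why the compile time size check at the end is worth having; this leans on the same take-a-different-view-of-the-memory trick as the rest of the section:

/* Public header: callers see the ops pointer and an opaque blob */
#define GIZMO_PRIV_SIZE (24)            /* hand maintained, see the check below */

struct gizmo {
        const struct gizmo_ops *ops;
        char priv[GIZMO_PRIV_SIZE];
};

/* Private definition (gizmo.c): the real layout. struct fob puts its ops
 * pointer first, so it lines up with the public ops member above.
 */
struct gizmo_internal {
        struct fob fob;

        int8_t type;
};

/* Fails to compile if the two views ever drift apart in size */
typedef char gizmo_size_check[
        sizeof(struct gizmo) == sizeof(struct gizmo_internal) ? 1 : -1];

Callers can then write gizmo->ops->settype(gizmo, 6) directly and only gizmo.c ever sees the internal definition.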


Error Handling and Function Documentation


It may seem odd at first that error handling and documentation are in the same section, however part of being a professional is doing the annoying things which help future developers. One key component of this is having good API design. Even if you can't have good API design you can still have good error checking and documentation.

Error handling and function documentation are, excepting one specific question, simple and formulaic. This is good, as being formulaic helps prevent coding mistakes.

/*
 * Whatchmacallit the whosit.
 *
 * Whosit the whatchmacallit as long as the stars are
 * aligned correctly. This function immediately performs the
 * action. It is assumed that the caller holds the gizmo
 * lock. The argument howhard is how hard to try
 * whatchmacalliting. A valid range is -1 through 13, the
 * meanings of these values are described in the definition of
 * struct whosit.
 *
 * Returns:
 *    EINVAL    - The whosit is not initialized
 *    EDOM      - howhard is out of range
 *    EEXIST    - Action does not exist
 *    ETIMEDOUT - Authentication server timed out
 *    ENOENT    - Failed to succeed, try harder
 *    ENOMEM    - Failed to allocate necessary memory
 */
 int whosit_whatchmacallit(struct whosit *whosit, int howhard)
 {
        int result;
        struct action *action = NULL, *a;

        if (howhard < -1 || howhard > 13)
                return EDOM;
        
        if (!whosit || !whosit->initted)
                return EINVAL;

        /* Grab a reference to the global list so it will
         * not disappear from under us
         */
        get_list(&global_action_list);

        foreach_list_entry(a, global_action_list) {
                spinlock(&a->lock);
                if (action_matches(a, whosit->action_num)) {
                        action = a;
                        break;
                }
                spinunlock(&a->lock);
        }
        if (!action) {
                result = EEXIST;
                goto out_nounlock;
        }

        get_action(action);

        whosit->lock(whosit);

        result = check_authentication(whosit);
        if (result == ETIMEDOUT)
                goto out;

        whosit->tries = howhard;

        whosit->data = malloc(DATASIZE);
        if (!whosit->data) {
                result = ENOMEM;
                goto out;
        }

        result = whosit->try(whosit);
        if (result != 0)
                goto err_out;

out:
        whosit->unlock(whosit);
        put_action(action);
        spinunlock(&action->lock);

out_nounlock:
        put_list(&global_action_list);

        return result;

err_out:
        free(whosit->data);
        whosit->data = NULL;
        goto out;
 }

When it comes to documenting a function there are a few mandatory parts. First is a short description of the function. Some manuals recommend that this be a single line. I personally don't mind fewer than three lines, but it should be a short summary. After this is a more complete description of the function. This description includes any documentation about the arguments, including where to find more information if this isn't the canonical source. All side effects and other calling requirements (such as which locks must be held) are also mentioned. Finally there is an exhaustive list of error codes this function can return along with the cause of each one. This includes error codes which may be passed up from functions this function itself calls. While it is possible for a developer to walk the code when the code is available, they shouldn't have to.

An important consideration is where to put this large block of documentation. There are two different spots and which is appropriate depends on what kind of code you are writing. If you are writing code where you expect the user to have the full source code, such as if you are writing a function in an application, then put this documentation block along with the definition of the function as above. The reason for this is that it is usually easier to jump to the definition of a function than its declaration. This is especially true if you have multiple versions of the function (for different platforms perhaps). If, however, this is a library function where you expect that the developer will not have access to the full source code, then this block should go with the function declaration in the header file.

As this example shows it is not only acceptable, but preferable, to error out early and to use goto. Relating to erroring out early, there is a school of thought that every function should have only one return point. This causes terrible nesting as the erroneous cases pile up. It is better to use early returns as intended to check preconditions.

Once the main body of the function has started though you often can't just return at random. There will be cleanup required. There are three ways to handle this. The first is to copy and paste the cleanup code into every spot where exiting is required. This is obviously not recommended in the general case because code changes and copy and paste mistakes are very common. The second alternative is to nest the code such that the appropriate cleanup code is always run. This is difficult to read and modify. The recommended approach is to use goto as in the example above.
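
Distilled down to a skeleton, with acquire_a() and friends standing in for whatever setup and teardown your function actually does, the pattern looks like this:

int do_work(void)
{
        int result;

        result = acquire_a();
        if (result != 0)
                goto out;

        result = acquire_b();
        if (result != 0)
                goto out_release_a;

        result = do_the_work();

        release_b();
out_release_a:
        release_a();
out:
        return result;
}

Each cleanup label undoes exactly the resources acquired before it, in reverse order.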

As an aside, goto is not evil, nor something to be avoided at all costs. The original paper (Go To Statement Considered Harmful) which decried the use of goto is usually taken out of historical context. When it was written the predominant languages of the time didn't provide support for structured programming, that is, they didn't have language constructs for for loops or switch statements. In such a case conditionals and goto were all most programmers had. This is obviously more confusing than necessary. Without goto, algorithms are often forced to use trigger variables to exit loops or to express any especially complex flow. You must be responsible in the use of goto, but it is still a tool with a valid place in the professional toolbox.

As this example shows, the use of goto allows a clear implementation of complex cleanup operations which must be partially performed depending on where the function hit an error. This is done without repetition of code and without cluttering the normal control flow. The key to this ability is the fact that the cleanup operations are listed in reverse order to when their dirtying operations were performed in the function. It is then a simple matter of jumping into the cleanup at the appropriate place on error. It is also possible, as seen above, to have a special error exit path after the final return to handle cases which differ from the normal cleanup actions.

Whenever possible you should use standard existing error codes. This makes it easier to get some initial sense of the cause of the error. ETIMEDOUT may not have a text definition which matches the use here, but it does extend the general idea of the error.


Callbacks


Callbacks are a common sight when dealing with complex data structures or library code. Contrary to popular opinion callbacks, especially immediately executed ones, are rather simple in C. By immediately executed I mean the callback is passed to a library function, perhaps to iterate over a complex data structure, and once the flow of execution has returned to the calling function the callback function and context are free to be released.

struct walker_context {
        int num_processed;
        int action_type;
};

static int walker(struct element *element, void *data)
{
        struct walker_context *context = data;

        if (!element) {
                /* Perform final iteration work */
        printf("Processed all %d entries\n", context->num_processed);
                return 0;
        }

        context->num_processed++;

        switch(context->action_type) {
                case 1:
                        action1(element);
                        break;
                case 2:
                        action2(element);
                        break;
        }

        return 0;
}

int act(struct list *list, int action)
{
        struct walker_context context;
        int result;

        memset(&context, 0, sizeof(context));
        context.action_type = action;

        result = foreach_list_element(list, walker, &context);

        return result;
}

We see that, for an immediately executed callback, things are pretty simple. Define the context structure your particular callback will use. Then define your callback, making it static if possible. Inside this function you take the void* data passed to you and immediately convert it into the context structure the function expects. Note that no casting is necessary or should be used. Many times a NULL element will be passed in once the iteration is complete, both as a completion signal and to allow the callback to be self contained in its action. Usually there is a return value for "continue processing" and another for "error, stop processing".

Since the callback is executed immediately and not stored for processing later, as would be the case if this callback were to be called because of some event, we can allocate the context structure directly on the stack. Initialize the context and then pass everything to the library function, foreach_list_element in this case. That's all there is to it really.
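
For contrast, here is a hedged sketch of a deferred callback, where the context must outlive the registering function and so has to come from the heap. The register_timer() API here is made up for illustration:

#include <errno.h>
#include <stdlib.h>

/* Hypothetical event API: calls fn(data) once, some time later */
int register_timer(int milliseconds, void (*fn)(void *data), void *data);

struct timer_context {
        int action_type;
};

static void timer_fired(void *data)
{
        struct timer_context *context = data;

        /* ... perform the deferred action using context->action_type ... */

        free(context);          /* the callback now owns the context */
}

int act_later(int action)
{
        struct timer_context *context;
        int result;

        context = malloc(sizeof(*context));
        if (!context)
                return ENOMEM;

        context->action_type = action;

        result = register_timer(1000, timer_fired, context);
        if (result != 0)
                free(context);  /* registration failed, nothing will fire */

        return result;
}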


Style


Formatting C code is a subtle task, but there are a few guidelines:

  1. Use whitespace. Whitespace makes dense code easier to parse by reducing its density.

  2. Align when it makes sense.

  3. Use vertical space. Much like prose is separated into paragraphs, code should be separated into sections by vertical space. Doing so makes it easier to know which chunks must be digested at one time. One blank line is usually sufficient.

  4. Don't indent too deep. Deep nesting makes logic more difficult than necessary to follow. Either reformat the code to nest less deeply (perhaps with some careful use of gotos) or factor some of the inner code out into a separate function.

  5. Follow the coding standard. You may not personally agree with the coding standards of the project you work on, but as a professional it is nonetheless your responsibility to follow them. Starting with a good coding standard (such as the Linux Coding Standard) makes life easier, but being consistent is the most important factor.

Beyond formatting there are also elements of style concerning how the code is written. In general the code should be written in the simplest manner which doesn't negatively affect the performance or conciseness of the code. Macros should only be used when they provide a repeatable and obvious benefit. Sometimes repetitive code is acceptable.


Next Steps


This article is not an exhaustive description of all the tricks and situations a modern professional C programmer may come across. Instead it is merely a quick summary. For a large set of examples you should look at the source of well written C code bases. Examples of this include Linux, QEMU, and many others. As with most disciplines the best way to learn is to watch those more experienced.


Bathtub Plains

Bathtub Plains: The flat part at the bottom of the failure curve. Many electronics never make it to the Bathtub Plains because they die early. Of course, many people don't keep their electronics that long anyways.

Outlook Killer

I don't know anybody who claims to like Outlook and yet you see it used in corporations everywhere. There are several good reasons for that. I'm going to tackle the calendaring abilities. Don't make the mistake of assuming that you can replace Outlook without doing calendaring.

These are the critical requirements to match Outlook's calendaring abilities. If you don't meet or exceed each of these you will fail to replace Outlook.

  1. The ability to look at coworkers' schedules when planning a meeting.

  2. Updating or rescheduling a meeting is trivial.

  3. Events work exactly like email. You can email all the attendees of the event without any extra work. Just open the event and reply to all the attendees.

  4. The calendaring must be in the same application as email. If a separate application is required then it will not be used as often as it should and the integration won't be as good as it needs to be. Corporations use meeting invitations primarily as email threads which just happen to have reminders and update the available schedule for item one above.

  5. As a corollary to three and four above, event invitations must work with lists of people. These should be existing lists, such as those already used for email. Nobody will create lists just for calendaring.

  6. Moderately complex recurrence setups must be supported. If I want a meeting to happen every Monday and Tuesday forever then that should be easy. It must also be easy if I want to move or cancel any single occurrence.

  7. Timezones must be supported mostly correctly. If I send an invitation to somebody then they must see the invitation in their current timezone. Furthermore, if they proceed to change timezones then the reminder must occur at the correct time.

  8. The calendaring must be useful enough to be used standalone to do calendaring. If it is missing any critical calendaring features then it won't be used and the schedule lookups from item one won't be useful.

  9. It isn't a web app. I know that all the cool kids think web apps are the future and no developer would be caught dead in this day and age without a 24/7 network connection. But that doesn't matter. People who use Outlook for real work and real calendaring sometimes do so disconnected (airplanes) or with bad connections (hotels or that office wifi which never really works in that far meeting room). They need to be able to work without a network connection and they need to be able to access their archives. These archives will be several gigabytes in size. Additionally, if the reminders aren't reliable then nobody will use the software. This itself necessitates a local component.

  10. You must be able to book rooms as well as people. Sure, you as a developer may never have actually booked a room, but the people who spend the money have.

  11. It must be possible to easily schedule meetings with people in a different organization than you. Back when inter-organization calendaring was a new thing this was less important. But now it matters and that means interoperating with the leader, Outlook.

  12. Invitations must include a positive acknowledgement of receipt and of attendance intent. No excuses.

Outlook certainly isn't the best of email clients. In fact it is likely the cause of many of the worst abuses and most ineffective email practices in the world. But even if you hate it, Outlook is well adapted to the corporate environment and calendaring is one key to that success.

Fun in the Slow Lane

Recently I was reading some posts from a developer who lives in a very rural area in a house on solar power. You can read about it here and here and here. The short of it is that he spends most of his time accessing the Internet via dialup and he has a few coping mechanisms.

It's been quite a while since I've used dialup so I was interested to try it out again, but I don't actually want to have to set up a dialup account. Instead I looked up how to throttle bandwidth on my laptop. On my Mac you can turn your connection into a modem for most purposes (http, https, irc, ssh and dns) with the following script:

sudo ipfw pipe 1 config bw 3KBytes/s delay 150ms
sudo ipfw add 1 pipe 1 src-port 80
sudo ipfw add 2 pipe 1 dst-port 80
sudo ipfw add 3 pipe 1 src-port 22
sudo ipfw add 4 pipe 1 dst-port 22
sudo ipfw add 5 pipe 1 src-port 6667
sudo ipfw add 6 pipe 1 dst-port 6667
sudo ipfw add 7 pipe 1 src-port 443
sudo ipfw add 8 pipe 1 dst-port 443
sudo ipfw add 9 pipe 1 src-port 53
sudo ipfw add 10 pipe 1 dst-port 53

You then turn your network connection back to normal when you've had your fun with:

sudo ipfw delete 1
sudo ipfw delete 2
sudo ipfw delete 3
sudo ipfw delete 4
sudo ipfw delete 5
sudo ipfw delete 6
sudo ipfw delete 7
sudo ipfw delete 8
sudo ipfw delete 9
sudo ipfw delete 10

Have fun surfing in the slow lane and remember, you can turn off automatic image loading.

The Networked Future is Local

Modern telecom networks are an amazing thing. I can trade packets between two semi-remote areas across the world with ease. I can be on the move in a car on the highway and be working at the same time on a server thousands of kilometres away. It's a great thing that works often enough. Network performance has been increasing all around for years, which is good because the amount of data we want to push through the network has also been constantly increasing. Unfortunately we are quickly reaching limits of physics with respect to the performance of these networks and that will require a change in how we use the network.

The first and most obvious limit is one of energy. We all love our mobile devices, but the batteries never last long enough. Battery technology isn't making significant headway and the radio is already a significant power drain on modern smartphones. In the quest for longer battery life we will have to transfer data more efficiently and transfer less.

The second most obvious limit is the speed of light. We just can't beat the speed of light, but we are sure putting our best foot forward to match it. Currently a good latency for the Internet is within a factor of two of the latency of light over the same distance. We can help this along slightly with better technology and more significantly with more direct cables, but we will end up at the limit sooner rather than later. Latency has a significant effect on the type of activities which can be performed without annoying the user. If you've ever had a transcontinental phone call you know what I mean: you are always interrupting each other while you wait for your voice to span the globe. Even today many websites use CDNs to move parts of their data geographically closer to the end user to reduce the pain of latency.
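
To put a rough number on it: light in fibre covers roughly 200,000 kilometres per second, so a 10,000 kilometre path has a hard floor of about 50 milliseconds one way and 100 milliseconds for a round trip, before a single router or queue adds its share. No cleverness gets you under that floor; only shorter, straighter paths or moving the data closer do.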

The third limitation is bandwidth. Wireless bandwidth, unlike wired bandwidth, is limited by the laws of physics to relatively small amounts. Worse, that bandwidth must be shared by all the users near you. Thus while you can get hundreds of megabits to your desk if you pay enough, you'll be hard pressed to get much more than tens of megabits on any cell network.

All these limitations lead to it being advantageous to store as much of the data you are working on locally as possible. Now it isn't possible to store all of the data, but if you have 95% of it then you can save a lot of time waiting for it to be transferred over the thin straw of cellular Internet. The rise of DVCSes is just one indication that local copies of most of the required data are coming back into style. In fact some systems, like email, have worked this way from the beginning.

I would be on the lookout for things which leverage the network, but insulate themselves from slow networks by storing as much data as necessary locally. DVCSes are one example, but distributed bug trackers are also gaining mind share. It may not be long before you don't have to fear that wiki server going down or being on the other side of the world, because it's easy to have your own local copy to hold you over until it is restored.

Basics of Effective Corporate Email

Email is perhaps the most used communication method in the world after human speech. Email is, in many cases, the only communication mechanism worth talking about at large corporations. It is unfortunate that there is no guidance given on how to use email effectively in a corporate environment. Instead people are left to figure it out and many just continue as if their corporate email account is just a busier personal account. This doesn't scale.

Before you can effectively use corporate email you need to understand a few features of email. The first is the concept of the inbox. You may have seen a picture of the desk of a desk worker from the early 20th century. On that desk you would see a blotter (that big piece of paper in the middle of the desk, often a calendar), an assortment of pencils, pens, paper clips, letter openers and at least two baskets. One would be labelled "Outbox" the other "Inbox". The theory was that throughout the day the mail boy would deliver mail as it arrived into the Inbox and take completed mail from the Outbox to be delivered. Sometimes there would be a third basket for items which couldn't be handled immediately.

In modern email the Outbox still exists, but is usually empty as the email is sent immediately. The only times it tends to get used are when handling email offline, when the email can't be sent immediately it will be queued to be sent the next time a network connection is available, or when the email server is down. The Inbox, of course, still exists. I'll come back to how to use the Inbox effectively.

Email also has three different addressee fields: To, CC and BCC. To should contain the recipients you want to respond to or act on the email. CC should contain the recipients you want to read the email, but from whom you don't expect a response or comment. CC is mostly for ensuring that people are kept up to date. BCC is used to send a copy of the email to recipients without the other recipients knowing that they received it.

One major difference between postal mail and email is that replying to a large list of people is incredibly easy. No longer do you have to write one copy of the response and then have a secretary make copies. Instead the computer does it all at the touch of a button. In the corporate environment you should never use the "Reply" button. Instead you should always use "Reply-all" and then trim the To and CC lists as necessary. This requires two important rules be followed. The first is to NEVER respond to an email unless a response is required or you have something useful to add. If you receive a mass email erroneously, ignore it. Responding will simply cause more mass mails to be sent out. If the erroneous emails persist, respond to the sender directly, not the entire list of recipients.

The second rule covers removing people from a discussion. During a discussion it often happens that a person no longer has any useful input. Such a case could happen if somebody was added to the discussion to help with one portion of an issue which has since been resolved and the discussion has moved on. Should this happen the individual may be removed by moving their name to the BCC recipient list with a note in the message saying that they are being removed from the discussion. The removed individual is out of the discussion, saving them time; they have the opportunity to re-add themselves; and everything is done politely. Future responses started using the "Reply-all" button will work as expected.

The rules to follow to make effective use of the Inbox are rather simple. The key is to keep the Inbox as empty as possible. This is easily done as long as you follow a few simple guidelines. The first is that the Inbox is not a TODO list. You can use it as such for items which will be done in the next day or two. If the action is further out than that you are better off using a proper TODO list, even if that list is on paper.

The second guideline is to respond to email in a timely manner. This doesn't mean that you have to continually check your email and respond the instant a new message arrives. In fact that is a recipe for being unproductive. Instead, when you reach a point where handling email is the right thing to do, you should respond to as much email as you can. After you have handled a message (which could mean just reading it, a quick response or possibly a more involved action such as looking some information up) you must move it out of your Inbox into another email folder.

There are differing schools of thought on how to handle email folders. Some people have quite intricate folder arrangements. I prefer a simpler approach of a single email folder (which is not the Inbox) for each project. Given the search capabilities of the modern computer I find any more granular manual sorting to be a waste of my time. Of course if you have automated emails which are more specialized in nature, feel free to set up filters and other email folders to store those items. It's useful to keep such noise out of the primary Inbox.

There are two states a message can be in inside the Inbox: read and unread. When handling email you should always deal with all the unread messages first, oldest to newest, and then all the read messages, newest to oldest. This ensures that messages get the most timely response, which makes your coworkers more effective and more likely to help you in the future. Here are the steps which should be followed when handling your email:

  1. For every unread email, oldest to newest

    1. If a response is not required move to archive

    2. If a brief response is required send that response immediately

    3. If a lengthy response or action is required mark the message as read and move to the next message.

  2. For every read email, newest to oldest, including messages marked read above

    1. Is the email still relevant? Perhaps a new message has made the message no longer require a response from you. If so move to archive folder.

    2. Do you know what you need to know to write the lengthy response or have time to perform the action? If so you should do so and then move the message to the archive folder.

    3. Has this email been in your Inbox for more than a week? If so put the action on your TODO list and move the message to the archive folder. If the item doesn't belong on your TODO list, perhaps because you don't actually intend to perform the action, simply move the email to the archive folder.

If you follow this algorithm to handle your email you should never have a constantly growing Inbox. If you follow this algorithm and find yourself still falling behind you are likely just overworked and need to find some solution to reduce the amount of email you receive. Delegation works reasonably well.

One True Way: Tabs

One of the endless holy wars of computing is tabs versus spaces in code. Basically every corporate coding policy dictates using spaces. Many open source communities (such as Python) dictate spaces as well. One notable exception is Linux.

All the people who recommend using spaces are wrong. The one true way to do indentation is tabs for indentation and then spaces for alignment. Specifically, tabs should never appear after any character on a line which is not a tab. This has one advantage indenting with spaces cannot match: the programmer can view the code with whichever indentation suits them best. If you have a low resolution monitor and fresh eyes, go right ahead and use two character indentation. After you've been coding for a day and a night on your high resolution 27" monitor you can move up to eight characters. More standard coders can use four characters. That one weird guy who likes three characters will be happy too.

With tab indentation and space alignment a developer can see any of these options whenever they wish without changing the source code. All it requires is a programmer's editor and a minor configuration to your terminal which I describe below.

Eight character indentation:

#include <stdio.h>
#include <netinet/in.h>

struct foo {
        struct sockaddr_in ip;
        int                id;
        char               server_name[1024];
};

Four character indentation:

#include <stdio.h>
#include <netinet/in.h>

struct foo {
    struct sockaddr_in ip;
    int                id;
    char               server_name[1024];
};

Three characters:

#include <stdio.h>
#include <netinet/in.h>

struct foo {
   struct sockaddr_in ip;
   int                id;
   char               server_name[1024];
};

Two characters:

#include <stdio.h>
#include <netinet/in.h>

struct foo {
  struct sockaddr_in ip;
  int                id;
  char               server_name[1024];
};

All the alignment stays consistent since it is done with spaces. If tabs were used in the alignment then it wouldn't look nearly as good. Here we have the same code aligned with a combination of tabs and then spaces. At eight character tabs it looks fine:

#include <stdio.h>
#include <netinet/in.h>

struct foo {
        struct sockaddr_in ip;
        int               id;
        char              server_name[1024];
};

However, when we view the same code with two character tabs it no longer lines up:

#include <stdio.h>
#include <netinet/in.h>

struct foo {
  struct sockaddr_in ip;
  int       id;
  char      server_name[1024];
};

So you should always indent with tabs and align with spaces. Most good editors have options which make this easy. Vim, for example, lets you use hard tabs with the 'noexpandtab' option and, with the automatic indentation options set correctly, it will handle the indentation for you.

Editors are one thing, but getting diffs and the like in terminals working is another. Until just the other day I didn't know how to fix that problem easily. However, I have discovered the tabs(1) utility. It's available at least on OSX and Linux, so I expect that it is available everywhere. With this utility you can set the tab width of a terminal with ease. Four character indentation is only a 'tabs -4' away.

Distributed Bug Tracking

Distributed bug tracking as an idea has been floating around the Internet for six or seven years now. And there have been several attempts:

Unfortunately each of these suffers from some combination of the following problems:

  • Being unmaintained

  • Not providing a graphical interface

  • Being VCS specific

  • Being a VCS (Fossil, Veracity)

  • Having issue formats which don't merge well

  • Not easily tracking the state of a bug inside a specific branch

  • Seeming to not dogfood themselves

I have been interested in a distributed bug tracker to use with my personal projects for several years, but the field never seemed to improve. The leading options always seemed to have one of the issues listed above. I tried a couple and found them lacking to the point where I would quickly stop using them and revert to TODO lists.

Finally I needed a distributed bug tracker badly enough that I broke down and wrote one. You can find the manual for Nitpick here. Nitpick avoids all the issues above while being simple and lightweight enough to start using quickly.

Roundabout Here

How can one go on a road trip and not discuss the roads? I, for one, won't be the first to start. So let's have a brief summary of the roads in New Zealand!

The roads in New Zealand are actually rather good overall. They are well constructed with clear signage which means business. If you see a sharp corner sign then you know that a sharp corner is coming up for which you must slow down. The vast majority of the roads are two lane highways with grassed shoulders. There is quite little asphalt and most of the highways are chip and seal. This makes for quite a bit of road noise, but traction seemed good in most cases. At the very least water tends not to pool on the road, though you'll still pick it up with the ground suction as you pass over it. Nicely, road glare is kept to a minimum.

Then there are the roundabouts. They are used everywhere possible to good effect. It's a bit of a shame that Canada doesn't have as many roundabouts as they are fuel efficient and keep traffic moving. They are fuel efficient in that except during heavy traffic you tend not to have to stop the car and then accelerate again. You also spend very little time idling waiting for a light.

Now the key to a good roundabout is size. The traffic circles I've seen in Canada tend to be too small. Roundabouts really must be large enough that the inner circle can go around at 30km/h without too much trouble.

Traffic in New Zealand tended to be light and polite. It was a lot like driving in the Maritimes. People will pull over to the side of the road to let you pass if you drive like a standard Lower Mainlander. Now I didn't drive myself, Don did all the driving, but on those long highway stretches we had plenty of time to discuss it. The New Zealand speed limit is 100km/h. I heard that there was some talk of increasing it, but I don't think that would be in New Zealand's best interest. That speed is not too slow to make good progress through the country. Overall the roads are good and nice drives, except when the high desert roads are fogged in and it's raining cats and dogs.

Things I Wish I Brought

As with any trip you discover things you wish you had brought and things you wish you had left during the course of the trip. These are the things I wish I had brought.

The first is a small travel power bar. I had the necessary travel adapter and had confirmed that my devices would work with the simple adapter. However I only had one. These days the standard traveller carries at least a laptop, phone and camera. It may be easy enough at home to charge all three of those devices at once, but if you only have a single adapter it becomes more difficult. I do wish that I had brought a small three socket travel power bar to split the adapter. It would also help with the fear I felt when hanging my laptop charger off a wall socket held up by nothing other than the travel adapter. I will definitely bring one along the next time I leave North America.

The second thing I wish I had brought was some string or light rope to fashion into a strap for my water bottle. I really detest travelling without water, so the first thing I did upon arriving in New Zealand was buy a bottle of water to refill throughout the trip. That certainly worked well, but I had no convenient way to carry it. I was fine in situations where I could have my backpack, but that isn't always possible. Five feet of light cordage would have provided me a solution to this problem. Next time I'll make such a sling before I leave for the airport.

More British Than You

Many countries across the world are more or less British. This isn't surprising as most of them are former colonies of Britain. There is a definite gradient however. At the low end you have the USA. It almost seems that they made a consistent and conscious effort to avoid being British.

Take the colour of post boxes. In Canada, New Zealand and apparently Britain they are red. I think they tend to be blue in the USA; at least the postal colour is blue and not red in the USA. Then you have accent. Britain, Australia, New Zealand and South Africa all have British-esque accents. They aren't the same, but they resemble each other. The USA accent differs significantly.

I have visited neither Australia nor South Africa, so you must take this next comment as a baseless supposition, but I believe that among the countries under discussion (Australia, Canada, New Zealand, South Africa, USA) the next least British are the Australians. At least in the past couple of decades it has seemed that Australia has positioned itself to be more similar to the USA than the prototypical British colony.

This is partially borne out by the relationship between Australia and New Zealand. It is quite similar in many respects to the relationship between Canada and the US. For one thing Australia has a larger population and seems more willing to toot its own horn than New Zealand. Then there is the fact that many Kiwis head over to Australia to try to make their fortune in much the same way Canadians sometimes do. Of course, being so close there is ample tourism between the countries, with Australia having the sunnier beaches. I have heard that Kiwis get similarly insulted if you believe them to be Australian.

So I believe the comparison is apt. But how does New Zealand compare to Canada on the colony scale? I would have to say that New Zealand is, without a doubt, more British. They drive smaller cars on the left side of the road, they have the British-esque accent, they eat more meat pies. They also seem to not believe in insulation or double glazed windows. I suppose it makes perfect sense. New Zealand is still a remote country and Canada has strong French and American influences.

Brown Custom Code and Bits Co.

An interesting aspect of New Zealand culture is that they appear to name things after people in ways which wouldn't happen in North America. Take Dick Smith's for example. This is a chain electronics store. I saw several examples of chains or otherwise medium sized enterprises named after people. You don't see this much any more in North America.

I doubt there is any deep meaning behind this observation, but it could be because of the smaller scale of the country. With only 4.3 million people spread rather thinly over the two major islands it's likely possible for a family owned business to make a niche for itself and remain. I don't know however, it could be something else.

World of Tomorrow, Music of Yesteryear

Whenever I travel I like to listen to the local radio to get a feel for the local culture. New Zealand has been no different. The first thing to note is that New Zealand seems to only have about four radio stations which are retransmitted all over the country. For example, there was one radio station we were listening to a couple hours south of Auckland. When we got out of range of that transmitter we assumed we'd never hear from the colour commentary again, which was a shame since they were pretty good. However, no sooner do we go to find a new station than we find the same people on a different frequency. In fact, we were able to listen to this station on and off as far as we travelled, North and South Islands.

Of course I spent some time listening to Radio New Zealand. There are two stations, a traditional variety station, similar to CBC Radio 1, and a music station, which seems to play concert music, but I must confess to not having listened to it much. The content was pretty much what you'd expect. Some news, some intellectual discussion and commenting on current events. There was also one music show by request which had a quite eclectic mix.

As to the content of the rest of the stations, I'm not sure if it's the people I am driving with or just New Zealanders, but it seems that most of it is older rock. There was one country station, but that station was vetoed almost immediately. The rest of the stations we've listened to have tended to be rock from the 90's and early in the previous decade. This isn't even us being picky. Since we are touring we are covering a lot of ground and moving out of transmitter range frequently enough that we tend to stick for a little while on the first station we find.

It isn't just on the radio where I heard this older rock. At the conference we were mooching off of there was a banquet with a band. The band was quite good, but tended to cover older rock tunes. Maybe this is just because of the type of people who were attending the conference.

Listening to local radio is an interesting view into the local culture. New Zealand may be the world of tomorrow, but its music seems to have come from yesteryear.

Christchurch

Today we visited Christchurch. The condition of Christchurch is better than I had feared. It seems that most of the city either escaped severe damage or has been repaired. However, a large portion of the city centre is off limits to the public.

This Red Zone is a very large demolition and construction area. Unfortunately it is so expansive and the military guard at every entrance so alert that you can't get in to see much of the centre. This includes the Christchurch Cathedral. However, we walked the perimeter and found one street view of the cathedral two blocks away. It wasn't a good view, but it was a view.

The feel of the city is a bit odd. Many of the buildings look brand new or are recently refurbished. However, interspersed are damaged buildings and buildings which are a bit rundown. It is all lively however. It is unfortunate that we won't have much time to stay, but the obvious construction boom makes staying difficult since most of the rooms in the city are taken and many roads are blocked off.

I think Vancouver would be lucky to look as good as Christchurch two years after the big one.

Clean SVN URL Update

It turns out that you don't need to modify your subversion configuration file to use the SSH wrapper script. The default tunnel setting first checks the SVN_SSH environment variable, so you can put the path to the script there instead, with something like export SVN_SSH=~/.subversion/svnssh.sh in your shell profile. This is very useful if you share your configuration files among many machines, since shell has conditionals.

Meat and Potatoes

Local cuisine is one of the joys of travel for the open minded. As long as you don't mind asking what it is after you have tried it you will do fine. Now this is New Zealand, not a place known for its exotic tastes. What it does have are meat pies.

I am a fan of meat pies. What is not to like about a nice flaky crust surrounding slow cooked meats and vegetables doused in a nice thick, flavourful gravy? So the abundance of meat pies pleased me. In fact, you can get meat pies and other savoury pies just about everywhere except high end restaurants. Cafes, street bakeries, gas stations. They all have pies and they are all quite good.

The only fault of the pies here is the lack of vegetables. The pies could be much improved with some more carrot and peas. This problem is not restricted to the pies however. It was generally agreed among my travelling party that the food here, while good, was light on vegetables and fruit. You got ample meat and potatoes and pastry, but little of the good for you stuff. I also noticed that vegetables seemed quite expensive at grocery stores.

Other than that one fault I had, on my journey, excellent: beef, pork, chicken, fish, venison, lamb, mutton and shellfish (This bad grammar left in because it makes Courteney hurt inside). I didn't see until late in the trip that there are some free range rabbits, and so didn't get a chance to try that. Vegetables may be in short supply, but the Kiwis know their meat and potatoes.

Travelling Rich

I once read something to the effect that the truly wealthy of the world don't interact with the real world much at all. The theory goes that as they travel the world they stay in hotels which are all high end and pretty much the same, they eat high end food in French styles, they shop in high end shops which are all pretty much the same. The only big differences are the local language, local architecture and the local currency, when they bother to concern themselves with that.

Now I'm not going to claim that I have travelled rich in this way, but I believe I've had a bare taste of it. Nothing but fine meals and controlled tours for several days along with this conference. From what I've seen it's pretty true. It's an experience you can get in any large city in the world and it's quite difficult to tell where you are if you don't look for it.

I suppose it appeals to some, but I think it's something people would do not because they can or because they enjoy it, but because it is expected. It's not the sort of travel for me. I prefer to travel in minimal comfort. Rent a car, stay in affordable hotels, eat at common restaurants. Actually experience the country.

Kiwi Efficiency

Have you ever seen the road work signs going up on a major route and dreaded the coming weeks of traffic snarls and rough roads? It would seem that isn't the way it happens in New Zealand. I woke on Saturday morning to see a major street in Auckland half blocked off with trucks parked off into the distance. When I returned to the city that evening nearly the entire street had been stripped for resurfacing. We woke Sunday to work already beginning as soon as legally possible. Upon returning in the evening on Sunday we saw the entire road section resurfaced. Fresh lines were painted after we returned from dinner.

That's the way road work should be done. Quickly and cleanly.

I'm not sure if this is indicative of any general trend, but I have noticed a few other things related to efficiency in my time here. The first is that neither of the two hotels I've seen has a built in air conditioner or central heating. I have seen or heard ads for heat pumps, house insulation, efficient light bulbs, water conservation and other minor conservation measures. As I mentioned previously, vehicles tend to be smaller, but I did see a television commercial promoting fuel efficient vehicle choices and choosing just as much vehicle as is required.

I get the sense that environmental concerns are at the forefront of the public consciousness. Perhaps this is due to the previous environmental devastation which was wrought on the unique ecology of New Zealand through the centuries. There are strong movements and punishments for further destruction of the few remaining pristine wilderness areas on isolated islands.

Then there is the furniture and houses. All the furniture I have seen has been minimalistic and well made. This is in contrast to the trend of the past few years in North America of ever larger oversized furniture. I will admit to not having seen much non-commercial furniture, but all the hotel and restaurant furniture I have seen is well made. Perhaps this comes about due to a relative scarcity of local wood suitable for constructing furniture.

I haven't seen any house interiors, but in general the houses appear nowhere near as large as in North America. The parts of New Zealand I have seen so far are quite similar to the Maritimes in housing sizes and condition.

Meal sizes also seem to be more moderated. To be fair I have been eating at higher end restaurants during my first few days here, but even the street bakeries and cafes have very reasonably sized portions. Restaurant food seems a bit expensive which may play into it. Or they may just have better portion control. There are certainly few obviously overweight people lumbering around the streets.

I get the sense that New Zealanders are a quite efficient people who accomplish precisely sufficient outcomes with little unnecessary waste. This seems to fit well with their resource and production situation. It is quite obvious that the economy is having a rough spell. I'm not sure that this is restricted only to the recent past and believe that the parallels to the Maritimes are equally valid here.

Pickup Trucks and Tow Hitches

When it comes to personal transportation in New Zealand there are a few notable differences from North America. The first is that diesel and gasoline prices are quite out of alignment with their energy content. Since diesel contains somewhat more energy per litre than gasoline, you would expect diesel to cost somewhat more. In New Zealand, yesterday, the relationship was reversed: diesel was approximately $1.56 while gas was $2.20. No wonder small turbo diesel motors are so popular.

The second thing to notice is that, like much of the rest of the world, they don't have full size pickup trucks. Given the cost of fuel and size of their roads (fairly narrow with nearly no shoulder), this isn't surprising. Trucks in general are also relatively rare, restricted mostly to those who seem to actually need one. It seems that most people who have a vehicle for sports purposes have SUVs. New Zealand full size seems to be North American mid-size.

Notice that I said that only those who truly need a truck on a regular basis have one. What about the people who need to carry stuff only every so often? Well, many more cars have tow hitches here. Even the sporty car we ended up renting (4L V8 baby) has a tow rating of 1600 kg. So it would seem that people here buy the vehicle they need and aren't afraid to use a small trailer when necessary. This makes perfect sense since a small trailer can carry as much as one of these trucks (though I haven't yet checked to see what the pickup trucks have as a bed load rating) for much less initial cost (you only need to buy a trailer once and it'll last many years) and much less continuing fuel cost. And yet you still have just about as much capability. Certainly enough capability for the person who buys a truck and never leaves the highways with it.

I think if you look at the vehicle choices of New Zealand you'll see the future for North America, especially Canada. For much less overall cost they maintain sufficient capability. Of course there are still large trailers and heavy trucks (which include one tons over here it seems) which can be bought when they are truly necessary, but those aren't necessary nearly as often as you see lifted one ton pickups with six litre diesel engines rolling down Lower Mainland streets.

Some Peculiarities of Air Travel

Here are some things I noticed during my 26 hour journey from the doorstep of my apartment in New Westminster to the New Zealand side of Customs.

Vancouver airport now has free wifi which is decent. It even has, in some lounges, power points which can be used to recharge. I'm a bit surprised at this because I thought wifi and power were among the major perks of the status and business class lounges. Note, however, that since power isn't everywhere you may end up sitting at a different gate than the one you will be leaving from. Also note that the initial boarding announcements are not made across the entire terminal, but only at the gate you should be boarding at. The final boarding announcements are made over the entire terminal though, which is good because otherwise we would have missed our flight leaving Vancouver.

The second thing I note is that when you travel by air your passport is likely to actually get stamped. I wasn't expecting this since crossing the land border with the US never results in a stamp.

LAX also claims to have free wifi and some power points. The power points are much sparser than at Vancouver airport though. Also when I was spending an extended four hour layover there the Wifi near my gate didn't work. Maybe there wasn't actually any free wifi.

It seems that either planes have gotten quieter in general, or that quieter planes are used for international travel. My experience with domestic flights in Canada is that the noise always started to bother me a couple hours after the flight began. That didn't happen this time. I suppose it's also possible that I'm starting to lose my hearing, but I hope not.

Airline food isn't actually that bad. I wouldn't call it five star, but the meals I had on Air Pacific were quite edible and well spaced. On the topic of food, I thought it would be a good idea to buy a couple of bottles of water after security in Vancouver airport. I expected to use this a bit on the first leg, mostly on the second and suffer the third flight. This doesn't actually work. It seems that no matter that you just got off an international plane, you will have to go through security again to get onto another international plane. You may even have to travel between different buildings of the airport, walking across land from the country you are in. If you want to bring water on your flight you'll have to buy it again for every flight separately.

Suit bags are pretty well designed pieces of luggage, but they don't work that well if the hotel room you book doesn't have a full size hanger bar so that the suit bag can be hung up while open. It is also very important that you close the hanger clip inside the suit bag, or everything will fall down into a heap when you go to open the bag up after hanging it up. Go me.

SVN over SSH With a Clean Repository URL

The biggest problem with the svn+ssh protocol is that the repository URLs leak too much information about where the repository lives on disk. svn+ssh://servername/home/me/repos/foo just doesn't look clean.

It happens that this is easy to fix. First write a small script on the workstation:

#!/bin/bash

# Tunnel to the server given as the first argument and start svnserve
# rooted at the repository parent directory.
ssh "$1" 'svnserve -t -r /home/me/repos'

Put this file somewhere appropriate, say ~/.subversion/svnssh.sh, and make it executable. Now you merely need to modify the subversion configuration file, usually ~/.subversion/config, to set the tunnel program for ssh (the ssh entry in the [tunnels] section) to be this script.

After doing this you are able to use svn+ssh://server/foo as the repository URL. You may want to include some additional logic to support multiple servers, but that is a simple extension.

Professionalism

There is a class of jobs described as The Professions. This post is not about them. This post is instead about what it means to be a professional and how it can apply to any job. Professionalism, at its core, is about doing things the right way, even when doing so is contrary to human nature, personally detrimental and not obviously necessary.

Let us discuss each in turn in the context of software development. Within every professional software developer is a craftsman. Some part of the developer enjoys doing good work and feels pride in it. This is where the most obvious conflict of human nature and professionalism begins. If one takes pride in their work they feel some measure of ownership. Not ownership in the copyright sense, but ownership in the sense of defender. The original developer is the expert on that piece of code and, since he feels pride in having written it, will strive to maintain its quality and form. Ownership in this sense is at odds with professionalism. Professionals must work in teams and feeling that one owns some code is at odds with effective teamwork.

Another aspect of the professional is that they put doing a good job over employment advancement or job security. This is done in several ways. The most notable is documenting the system appropriately, though it isn't externally obvious whether all the necessary documentation and automation exists. It is quite easy for a developer to automate some complicated and necessary process with a bit of scripting, but then neglect to polish that tool sufficiently for use by the rest of the team. In fact, perhaps the most important indicator of the willingness to make oneself easily replaceable is the professional's understanding of the team. To a professional the team is not just the developer himself. The team isn't even just the existing people currently working on the project. Instead the team is an abstract set which contains not only the developer as they are, but the developer thirty years from now after they have forgotten everything to do with this project. Even more expansively, the team includes members not yet hired and the professional's replacement. A professional makes themselves easy to replace, often to the point of maintaining documents to orient their replacement and at times training their replacement as one of their final acts before moving on.

Then there are the things about a job which just don't seem that important. In the software world these are the design documents being kept up to date, having appropriate code comments or even providing useful explanations of what otherwise incomprehensible errors mean in particular circumstances. Perhaps most important is documenting and automating testing. These tasks are tedious and thankless. These tasks are also the key to good maintainability and a sign of a well polished development setup. A professional makes themselves replaceable and does the job to completion, not just the fun and interesting bits.

Professionalism is what differentiates software engineers from mere programmers. As with most things professionalism is a spectrum. I'm slowly moving my way towards being an exemplary professional, but I'm not there yet. I'll get there, one step at a time.

Thoughts of the Day

In a post scarcity world only attention hours are in short supply.

Usenet isn't dying, it just turned into Reddit.

Straight Razor Shaving: A Few Tips

In the past shaving was a luxury. I've heard that men would shave twice a week, on Thursday and Sunday. For centuries this was done with the equivalent of the straight razor. These days there are many different tools and products used to shave and shaving is no longer a weekly luxury taken in at a barber shop with beer and chatter. Now shaving is a daily burden required to greater or lesser degrees by polite society.

In any case there is still no shave to be found which is better than that of a straight razor. As with many of the old traditions which used to be passed down from father to son, much of the knowledge of how to use a straight razor has been lost to the majority. So I have put down here the extent of my limited experience with straight razor shaving.

Rule one of straight razor shaving: Never move the blade parallel to its edge. Always move it perpendicular to the edge, like you are trying to sweep something up with it.

Rule two of straight razor shaving: Always move the blade perpendicular to the edge, like sweeping. To do otherwise will leave you horribly scarred.

The key to a good, close shave is the sharpness of the blade and the smoothness of the cutting motion. A sharp blade cuts the hair without pulling it, resulting in the hair being cut level with the skin and not below the skin. The smoothness of the cutting motion is important to keep the skin from bunching up.

There are four keys to a nice smooth shaving motion:

  1. Practice. The following tips will help, but nothing beats practice. Without a steady hand you aren't going to have a smooth shave no matter what you do.

  2. Pull the skin. As you shave a region just tug on the skin a little bit to tighten it slightly. This will prevent bunching of the skin by ensuring there is less loose skin to bunch.

  3. Before you shave you must soak the hairs in hot water. This will soften the hairs, which makes them easier to cut through as well as opening the pores, which pushes the hairs slightly outwards from the skin. Shaving straight out of a hot shower is recommended, but if that isn't possible a hot towel will suffice. Use the hottest water in the towel that you can stand and keep the towels on the face for no less than five minutes, ten is better. If the towel starts to get cool you will need to use another towel.

    I also recommend that you shave in a hot, humid room, such as the bathroom after a hot shower, in order to keep the pores open as much as possible and to prevent the hairs from drying out. How much of a problem these two are depends on your particular characteristics.

  4. Shaving soap. Sure you see ads all over the place for shaving creams which promise you the world. Don't believe them, they lie. The major reason to use shaving cream and the like is to lubricate the skin. If your shaving cream isn't doing that you are using the wrong stuff.

    I recommend proper shaving soap. This is soap which is high in glycerin. Properly lathered this soap will further soften the hairs, lubricate and clean out the pores.

    To use shaving soap (or other high glycerin soap) you must first lather it. To do so put some soap into the lathering cup. I prefer to buy shaving soap which comes already in a suitable container, but you can use a regular cup if you wish and then just cut some soap into it. You don't need much soap for each shave as the secret is in lathering it. Now you will need a badger hair lather brush, accept no substitutes. You take the brush and dip it shallowly into hot water. Then you lather the soap using this brush in a circular motion in the lather cup. You only want to use a little water, I dip my brush about a quarter of an inch, as too much water will prevent lathering and you can always add more water if you aren't seeing the results you desire. Lathering should only take a minute or two and you should end up with a rich, thick lather which sticks to the brush and the side of the cup.

    Now that you have the lather you apply it to the skin using the brush in long strokes. For the most part these strokes should be up and down like you are painting the skin. You should cover all the skin you intend to shave and then a little bit. If you are a slow shaver or the temperature and humidity of the room does not allow you may wish to lather only part of the area at any one time. The lather must still be thick when you go to shave the area. If you need to reapply the soap you can give the soap a few quick lathering motions first to ensure a good lather.

    Once you have finished shaving it is important to clean the lather brush properly or its performance will degrade. The proper way to clean a badger hair brush is to run it under slightly warm water while gently squeezing it with your hands. The running water soaks in and the squeezing pushes the soapy water out. Continue this until no more soap comes out. Then you should get the majority of the water out of the brush with a couple gentle squeezes and then a small number of wrist flicks. It is ideal to then hang the brush in a well ventilated area to dry, but I've not had trouble with standing the brush up on its handle.

    A good shaving soap is very important to a good shave. In fact, I have had excellent results with cheap disposable razors as long as I have used proper shaving soap with a good lather. A good shaving soap will also prevent or greatly reduce razor burn by reducing the friction of shaving.

The magic of a straight razor is the sharpness of the blade. This is why the single blade of a straight razor can provide a superior shave to the multibladed razors of the modern age. Unfortunately I have no good advice on sharpening a straight razor; I ruined my blade on my first attempt. I can only suggest finding some old man who has great experience hand sharpening blades to extreme keenness or attempting a proper knife sharpening system. As far as I can tell the razors should be sharpened to a 17 degree angle.

Though I can offer no great advice on sharpening a straight razor I can tell you how to keep one sharp. To keep a straight razor sharp there are really three things to keep in mind:

  1. Follow the recommendations for a smooth shaving motion above. The best way to keep a blade sharp is to not dull it in the first place. Most of those recommendations, but especially the hot water soak, are important for softening the hairs and making them easier to cut through. The softer the hairs the less wear on the blade edge.

  2. Rest your razor. As you shave, the edge of the razor gets slightly bent through the resistance of the hairs. Resting the razor is simply not using it for a time. Gentlemen of times past used to have one straight razor for each day of the week to ensure that the razors stayed sharp as long as possible. This works because the bent edge will return to almost its original position if it is not bent too severely and is given time to rest.

    If using a personal razor you should have no problem resting the razor at least a day between uses. Resting it longer will reduce the wear and so if possible it may be advisable to either shave on alternating days or to rotate between multiple razors. Additionally if you are shaving large areas of skin, such as legs, with a straight razor you might consider using several razors in one shaving session to reduce the wear.

  3. As resting the razor undoes some of the wear so does stropping. Stropping should be done on conditioned leather only. There are two common types of strops, solid ones which are made on wood blocks and flexible ones. Which you use really depends on the layout of the shaving space. I prefer the flexible ones, though they require something to hang one end off.

    Stropping should be done only at the beginning of shaving, never after shaving. You can strop during shaving if the razor is no longer sharp enough, but that will cause dulling and I recommend against it if possible. To strop you simply ensure that the strop is straight or nearly straight and lightly drag the razor along the strop, blunt end first. You should strop alternating sides of the razor. The easiest way to do this is to lightly place the razor on the strop nearest yourself with the edge pointed to you. Then drag the razor away from you towards the other end of the strop. Then when you have nearly reached the end you should flip the razor over onto its other side. Flip over the blunt edge of the razor. If you try to flip it any other way you will mess up from time to time and either cut your strop, damage the razor, dull the razor prematurely or at the very least have to spend more time stropping. You should strop each side gently about twenty times.

    If your strop is narrower than your razor you will have to strop the entire blade either by moving the blade during the stropping motion or by stropping the side nearest the tang on one trip along the strop and back and the side furthest from the tang on the next trip.

    The point of stropping is to realign the edge of the razor which was bent out of shape as you shaved. If the edge only has to be moved back a little bit it is likely to bend easily. However if the edge has to be moved back too much some of it will break off, causing dulling. It is for this reason that stropping should only be done after resting the razor in order to keep the razor sharp.

There are a few more things to keep in mind about maintaining a straight razor. Firstly you must always remember that a nick in the razor is a bloody nick in your skin. As such it is critical that you take good care of the razor and fix all rust and edge damage as soon as possible. Secondly, no matter how skillful and diligent you are at maintaining the sharpness of your razor it will need to be sharpened eventually. If you are taking care of the razor and shaving a face every day or two the razor should only need to be sharpened once or twice a year.

So that is all the theory about using a straight razor. As long as you remember Rule One of Straight Razor Shaving you should be able to learn without too much trouble or long lasting injury. I will now list out the steps I use when shaving with a straight razor. Use this as a suggestion, but shaving is a personal thing and you'll have to experiment some to figure out what works best for you.

  1. I always try to shave directly after a shower. I always try to end the shower with nice and hot water to ensure a steamy bathroom. I find I just don't have the patience for hot towels at home. If I ever get a barber shave though, hot towels all the way. It's just a shame that most barber shops don't serve beer these days like they used to.

    When I step out of the shower I dry everything except my face. I want my face as wet as possible. I keep the door closed and try to keep the bathroom as humid as possible.

  2. After ensuring a soaked face I strop the razor. This shouldn't take more than a minute or two.

  3. Then I lather the soap as described above. I put on a thick coating over my entire face. I find it helps keep the hairs soft even if I have to relather later.

  4. I shave, first the left side of my face with my right hand and then the right side of my face with my left hand. Being left handed has endowed me with some useful abilities. I shave in sections where each section is mostly flat. So my cheek down to the jaw line is one section. Under the jaw and on the neck is another. Special care is taken under the nose and on the chin. Always shave with the grain the first time over. If you have areas where the hair grows in circles or changes direction you will have to go over that area once for each direction. Always lather the area before shaving it.

    After every stroke of the razor I rinse the razor off with hot water. I find that this helps keep my skin warm and my pores open.

    Each stroke goes from the top of my face to the bottom of my face. The razor is held between thirty and forty degrees to the skin, where zero degrees would be laying the razor flat on my skin and ninety degrees would be the blade sticking straight into my skin.

    Never move the blade parallel to its edge. That is, never move the straight razor in a sawing motion. Doing so will cut straight into you and likely leave you with conversation starting scars. Always move the blade perpendicular to the edge, like you are brushing something with it. If you must turn a curve do it gradually.

  5. Now that I have finished the first time over I will consider additional passes. Shaving against the grain is a way to achieve an even closer shave, but I find that I end up with ingrown hairs if I do. It appears that many men are this way so shaving against the grain is optional.

    I will, however, often find some spot which I hadn't shaved sufficiently well for my tastes. These areas I will lather with soap again and then shave again. I try to avoid shaving an area in this way more than two or three times to avoid razor burn.

  6. After I am satisfied with my shave I take a towel soaked with the coldest water I can find and press the towel against my face. I do this to cool my face, close my pores and stem any bleeding from small cuts. I'll often need to soak the towel twice. Once my face is cool I use the towel to pat and wipe off any remaining soap.

    Some people suggest using aftershave at this point, but I've never seen the point. Using shaving soap ensures that any small cuts are clean along with the skin itself. The soap can also be lightly scented if you desire that. Finishing with an ice cold towel also does wonders for stopping any bleeding.

  7. Then it is time to clean up. Since I lather in a lather cup I simply put the lather cup away. I rinse the lather brush and stand it up to dry. I ensure that the razor has been rinsed off and that there is no water left sitting on the blade. Patting with a towel or a couple of very careful flicks will remove the water. I rinse and wring the towel and hang it up to dry.

  8. I then present my freshly shaved face to my wife for inspection. I usually pass with flying colours.

I believe that is all one can really be told about shaving with a straight razor. It is a rewarding skill, but does require practice. For somebody considering starting out shaving in this fashion I would suggest buying a new, factory sharpened razor. Though it seems counterintuitive, as long as you remember the first rule of shaving with a straight razor, a sharp razor is less dangerous and painful than a dull one.

Sketch of P2P DNS

Several months ago the US government started confiscating domain names. This started some portions of the Internet honking like a gaggle of geese. One concept which came up from this was a P2P DNS system which would be resistant to such government intervention. Creating such a system presents three difficulties:

  1. Ensuring a domain is only 'registered' once

  2. Allowing the owner to modify the domain at will

  3. Distributing the domain information

The first is easily done by having a single master key pair whose private key signs the key of the domain along with a date. The domain key with the earliest signature date is the correct one. This implies that ownership of the domain key is ownership of the domain. The central authority can only hand out ownership of a domain once. Furthermore lost keys will result in an unmodifiable domain.

The second requires that the owner of the key sign the updates with the domain key and a date. Properly signed version information with the latest date is the correct one to use.
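Purely as an illustration of these two selection rules, here is a minimal Python sketch. It assumes the signatures themselves have already been verified, and every name in it is hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class Registration:
    domain: str
    domain_key: str   # the key being granted ownership (hypothetical representation)
    signed_on: date   # the date in the master authority's signature

@dataclass
class Update:
    domain: str
    records: dict     # e.g. the authoritative DNS servers for the domain
    signed_on: date   # the date in the domain owner's signature

def current_owner(registrations):
    # Rule one: the domain key with the earliest master-signed date wins.
    return min(registrations, key=lambda r: r.signed_on)

def current_records(updates):
    # Rule two: the owner-signed update with the latest date wins.
    return max(updates, key=lambda u: u.signed_on)

Everything beyond comparing dates, namely the actual signing and verification, is exactly the part a real system would need to get right.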

The third requirement is more complicated, but there are several distributed hash tables out there which would seem to fit the bill. In the worst case some simple eventually consistent P2P system could be created with relative ease.

The reason such a system would work is that it does not attempt to completely replace DNS. Instead it merely replaces the root and TLD DNS servers. Each domain would be represented in this system by the same information which is already returned by a normal DNS query used in the hierarchical hostname search. That is, it would contain at least the authoritative DNS servers for the domain.

Though such a system could be extended to contain all the hostnames on the Internet it would grow quite large. It would seem that there is a low limit to the amount of data a person will deem reasonable to allocate to P2P infrastructure. Not extending it as such leaves the domain DNS servers as the first vulnerable point in the chain of viewing a webpage, but no more vulnerable than the content servers themselves.

Reliability versus Dependability

Reliability is the capability of a tool to perform a specific purpose without failure. Dependability is subtly different from reliability in that it is the capability of a tool to perform the task of the moment sufficiently well. Perhaps a short listing of expectations will help:

Reliability:

  • Perform the designed task successfully every time

  • May exclude manual intervention

Dependability:

  • Perform the prescribed task as often as possible

  • Gracefully degrade under adverse conditions

  • Be abusable, work on the fringe of the stated purpose

  • Be amenable to jury-rigging, either repairs or modifications

Take, for example, a ball point pen. Ball point pens are designed to write on dry, clean paper. A reliable pen will do so until the ink runs out. A dependable pen, however, will also be able to write, if with difficulty, on dirty, crumpled paper. Wet paper or things which aren't paper at all, such as wood, are also markable by a dependable pen. More so even than just marking in adverse conditions a dependable pen can be used as a small lever or to push small buttons.

These latter uses don't use the pen for its marking ability, but instead misuse the physical properties of the pen. Dependable pens have these free variables of construction, the strength of the body for example, tuned to be more useful in such ways.

In general institutions desire reliability, because they will have one tool for each task, while individuals desire dependability, since they have a much wider variety of needs and circumstances.

Mesh Networks

The recent Egyptian network blackout has caused a surge in interest in decentralized mesh networks. For quite a while I've personally thought that a widespread mesh network would be great. Unfortunately, while I know that such a network could be created using available technology, I don't believe it ever will be. There are several reasons, but they essentially come down to two overriding ones: geek density is too low and everybody expects real time communication. That is the won't-read version.

The more detailed reasoning is based upon technical restrictions. The conceptual framework of a mesh network that I'll be using is comprised of four assumptions:

  1. Every node connects to a number of mediums. Any node may be connected to as few as one medium, e.g. using their one wifi card, or to as many as ten media, e.g. a couple of shared wifi channels and a handful of point to point links using directed wifi or Free Space Optics. Nodes connect primarily to other nodes geographically nearby.

  2. Every link between nodes has approximately the same bandwidth as every other link. For the purposes of theory we can assume that every link is equally capable. For the purposes of a practical implementation we can assume that the achieved throughput ranges between 1Mbps and 100Mbps. This is a reasonable assumption as all the commonly available 802.11 Wifi protocols have achievable throughputs in this range. I deem it unlikely that a significant portion of the network will have Gigabit Ethernet links to each other.

  3. Point to point links may use some private medium, such as a wire or focused laser beam. Broadcast links need to use radio spectrum. As such point to point links can be assumed to always have the nominal bandwidth available, whereas shared mediums must divide the available bandwidth among all the nodes using that medium.

  4. The average distance covered by any link is 1KM. This is optimistic for regular 802.11, but perfectly possible with fixed point to point links such as directed wifi.

Let us first discuss the infeasibility of such a mesh network to support primarily real time communication, as the current Internet does. Assume, for the moment, that it takes one millisecond for a node to forward a packet. Given the assumed average link length of one kilometre the network would have a minimum round trip time of 45 milliseconds from one side of a moderately sized town to the other and back again. That isn't too bad and is slightly faster than most of the current Internet. However, most of the websites you visit aren't hosted in the same city. Most aren't even in the same province. Let us assume that you are in Calgary and wish to access a site in Vancouver. The driving distance is about 1000KM, which we'll use since nobody can afford to lay a hundred kilometre link. That means that a round trip would take two entire seconds. That isn't quite what most people consider real time.

If the problem was just latency we could all get used to it. Unfortunately we also have to deal with bandwidth. As mentioned in this paper the average number of nodes a random request must traverse is proportional to the square root of the total number of nodes in the network. This implies that the average available bandwidth for each node is the bandwidth of the link divided by the square root of the number of nodes. Take the 100Mbps assumption and scale the network to ten thousand nodes and the effective bandwidth for each node averages out to 1Mbps. Such a network would cover a small geographic area and 100Mbps is quite optimistic. More realistic may be an average link speed of 10Mbps, which gives an average of 0.1Mbps, approximately 10KB/s, on a ten thousand node network. While there are certainly uses for such a network I'm not sure that people in urban areas would be willing to go back to dial up speeds.
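As a sanity check on the arithmetic in the last two paragraphs, here is a small Python sketch using the stated assumptions: one millisecond to forward a packet, one kilometre per link and a path length growing with the square root of the node count.

import math

FORWARD_DELAY_S = 0.001  # assumed per-node forwarding time
LINK_LENGTH_KM = 1.0     # assumed average link length

def round_trip_seconds(distance_km):
    hops_one_way = distance_km / LINK_LENGTH_KM
    return 2 * hops_one_way * FORWARD_DELAY_S

def effective_mbps(link_mbps, nodes):
    # Each node's average share of a link shrinks with the square root
    # of the total node count, per the paper referenced above.
    return link_mbps / math.sqrt(nodes)

print(round_trip_seconds(22.5))    # across a town and back: ~45 ms
print(round_trip_seconds(1000))    # Calgary to Vancouver and back: ~2 s
print(effective_mbps(100, 10000))  # ~1 Mbps per node
print(effective_mbps(10, 10000))   # ~0.1 Mbps, roughly dial up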

However, if most of the network accesses can be made to traverse only a small number of nodes then the bandwidth and latency issues are dramatically reduced. This can be achieved simply by using a caching architecture where each node has a cache of the content which has passed through it. A request would then be serviced by the first node on the path which had the content cached. Unfortunately this means that real time access won't happen, but most of the web isn't real time anyways. There is still the latency problem from distant nodes in other cities or provinces with transfer of updated content taking hours. You won't be able to IM your friend in New York, but you could still email them.

So real time access isn't going to happen, but that doesn't mean such a network wouldn't be useful. Local content would still be fast enough. So let us consider the requirements of setting such a mesh up. The biggest requirement for a mesh is to have the maximum possible aggregate bandwidth on each node. Since the primary factor in the aggregate bandwidth of a node is the number of network mediums it uses, obviously every node needs as many mediums as possible.

As explained in the assumptions there are two broad classes of mediums: shared and private. Shared media have the benefit of being easy. All it takes is a device with a wifi card to join a shared medium. Unfortunately a shared medium, while allowing many links between nodes, shares some fixed bandwidth among all those links and the links of other nodes using the same medium. A fast 802.11n shared medium would probably only afford each link an average bandwidth of around 1Mbps unless the network was quiet. Private media, on the other hand, always provide good bandwidth to each link. Unfortunately it takes one private medium for each link and additional effort for each link. A private Wifi link requires that directional antennas be aimed and configuration happen at both ends of the link. Private media may also provide good range, such as the Free Space Optics mentioned above which are reliable out to 1.4KM.

Who has the energy and knowledge to set up numerous point to point links, or even one? Not normal users who just want to plug something into a wall and have it work. Just geeks. I don't know that there is sufficient density of true geeks to put such a network together. You'd need at least one on every side of every apartment building and a couple on every block of houses. That's a lot of geeks talking to each other.

A successful mesh network could be built and would probably look like Freenet. Anybody want to confirm that the network assumptions are similar?

Writing a Hackish Profiler

When a program is too slow a competent developer will first look to faster hardware. When faster hardware isn't possible this developer will then reach for their profiler. Profilers help a developer determine why a program is slow so that it can be sped up. But what if you are on a platform with poor native support *coughANDROIDcough* which doesn't support the tracing profiler you need? Write your own of course!

Now writing an efficient, low overhead and functional profiler is an involved task which can take a significant amount of time. Since we are all busy instead we'll do an 80% job by writing an efficient and functional profiler, but skip the low overhead. This should allow us to get the numbers we require, but will cause the program to run much slower.

First we list our tools:

  • gcc -finstrument-functions causes gcc to call the __cyg_profile_func_enter and __cyg_profile_func_exit functions as part of the regular function call preamble and postamble.

  • A small program, trace2text, which converts from the efficient binary format to a text format which is easily manipulated.

  • nm -al to help convert function addresses into symbol names and source line numbers.

  • c++filt to unmangle the symbol names.

  • A small script, join_symbols.py, to do a relational join between each trace line and the matching nm line on the function address.

  • A bit of shell script and awk to transform binary trace files into something human readable and usable with code folding to explore the trace.

  • A final small script, avg_time.sh, to take a human readable trace and produce the average amount of time a call of a particular function requires.

With these few tools we can simulate a fair amount of the power of gprof, though with significantly more overhead.

For various reasons I will not be providing significant source to the above bits. One primary reason is that, being hackish, the profiler was tuned specifically to grab the statistics I required for my particular problem. Specifically I had a multi-threaded program which processed a number of packets coming in off a network. The program was neither IO nor CPU constrained, but was still unable to keep up with the possible network data rate because processing each packet took too long. My job was to figure out why and fix it. Because of this situation I needed to determine which functions were spending time blocked on some resource in the course of processing each packet. This is in contrast to simply being able to look at which functions are called the most or use the most CPU time over an entire run.

With this in mind I will lay out one way to implement the hackish profiler and hopefully point out all the pitfalls you may run across. The starting point for any profiler is accessing the data it requires. Some profilers hook into the kernel to sample what the system is doing many times a second, others hook into the function call preamble of the program in question via compile time modification. I needed the latter on a system with no support for it. Conveniently the build chain uses gcc and gcc provides -finstrument-functions. This compile flag causes gcc to add calls to __cyg_profile_func_enter upon entering a function and __cyg_profile_func_exit upon exiting a function. Each of these functions is provided with two arguments, the address of the function just entered and the address of where the function was called from. These addresses are only approximate due to automatic Program Counter movement, but are a constant increment greater than the true address. In my particular case on the ARM architecture the function addresses were one greater in the argument than those produced by nm.

Gcc does not provide these functions, so you will have to write them yourself. In my case I was interested in having a trace for each thread separately and with minimal overhead, since we are already turning every function call into at least three function calls (one for the original call, one for func_enter and one for func_exit). To this end I created a simple structure which stored the function address, call address, time and whether the function was entering or exiting (for this last I borrowed a bit from the time). Each thread had a contiguous array of these entries and when the array was full it was flushed directly to that thread's file. The size of this array must be chosen with care: too small and the cost of the write system call will result in inaccurate numbers being recorded, too large and each write call will stall for a significant time causing spikes in the recorded timing. I ended up using a guess of 10000 entries and it seemed to be accurate enough for my needs so I didn't experiment.

It is critically important that you ensure that you do not cause infinite recursion from these two functions. There are two ways to ensure this. The first is to realize that -finstrument-functions is a compile time option and therefore any function not compiled with this option is safe to call. This includes any system library functions. The second method is to tell gcc to exclude the function from being profiled. This is done with the no_instrument_function attribute. Do note that this attribute can only be supplied in the prototype of the function and not the function definition itself. As an example this is how you should define each of the profiling functions:

extern "C" void __attribute__((__no_instrument_function__)) __cyg_profile_func_enter(void *this_fn, void *call_site);

extern "C" void __cyg_profile_func_enter(void *this_fn, void *call_site){...}

You must supply any function you write with a similar attribute to avoid infinite recursion.

With these functions written and working you should end up, after a run of your program, with a number of binary trace files, one for each thread. You will need to convert this to a text format to easily make use of the existing tools such as nm and c++filt. I did this with a small C program which took each record and printed out a line with the equivalent data. Every record was space separated. I also found it convenient to use a stack and compute the elapsed time each function took at this point since I was already sequentially processing the entire trace. The most important thing to keep in mind at this point is that the addresses must be output in the exact format that nm outputs the addresses. In my case this was %08lx, but on 64-bit platforms it will likely be %016lx. Matching the format will make the relational join easy. You should also adjust the function addresses as necessary to match the addresses of nm. Again, in my case I had to subtract one from the address supplied to this_fn.
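My converter was a small C program, but as a rough illustration of the idea here is a Python sketch. The record layout used here, a 32-bit function address, a 32-bit call site and a 64-bit timestamp whose low bit marks exit, is an assumption for the example; match it to whatever your enter and exit functions actually write.

import struct
import sys

# Assumed record layout: 32-bit function address, 32-bit call site,
# 64-bit timestamp with the enter/exit flag borrowed from the low bit.
RECORD = struct.Struct("<IIQ")

def convert(path):
    stack = []  # entry timestamps, used to compute elapsed time on exit
    with open(path, "rb") as trace:
        while True:
            raw = trace.read(RECORD.size)
            if len(raw) < RECORD.size:
                break
            func, call_site, stamped = RECORD.unpack(raw)
            exiting = stamped & 1
            time = stamped & ~1
            func -= 1  # adjust to match the addresses nm reports (ARM, in my case)
            if not exiting:
                stack.append(time)
                print("Entering %08lx %08lx %d" % (func, call_site, time))
            else:
                elapsed = time - stack.pop() if stack else 0
                print("Exiting %08lx %08lx %d %d" % (func, call_site, time, elapsed))

if __name__ == "__main__":
    convert(sys.argv[1])

Note the Entering and Exiting words in the output; they double as the fold markers mentioned below.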

Now you need to convert those function addresses into symbols so you know what you are looking at. What you want to do is combine every line in the text trace with the matching line of the nm -al output on the profiled executable (make sure you run it on an executable with debug symbols). I originally did this using the join command, but this requires that all the inputs be sorted upon the join field and this proved to be too time consuming. Instead I wrote a short Python script which read in the nm output into a dictionary keyed on the address and then printed out the joined line for every line on standard input. This drastically cut down the time needed to process the traces into something usable since even a short run will produce hundreds of megabytes of text traces.
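A sketch of that join looks something like the following; the column holding the function address in the trace is left as a parameter since it depends on the text format you chose.

import sys

def load_symbols(nm_output_path):
    # nm -al prints the address first; keep the rest of each line verbatim.
    symbols = {}
    with open(nm_output_path) as nm_lines:
        for line in nm_lines:
            fields = line.split(None, 1)
            if len(fields) == 2:
                symbols[fields[0]] = fields[1].rstrip("\n")
    return symbols

def join(trace_lines, symbols, address_field=1):
    # address_field is an assumption; point it at whichever space separated
    # column of your trace holds the function address.
    for line in trace_lines:
        fields = line.rstrip("\n").split(" ")
        print(line.rstrip("\n"), symbols.get(fields[address_field], "<unknown>"))

if __name__ == "__main__":
    join(sys.stdin, load_symbols(sys.argv[1]))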

Once you have the joined output you merely need to convert it into the final usable form. The first step is to use awk to move the various fields into the order you desire. The second step is to pipe the entire trace through c++filt to demangle the function names. This will turn the mangled symbol names into full function or method names with the types of the arguments. These will contain spaces so the demangled symbol name should appear after any fields you wish to easily process. In my case this was the total time the current invocation of the function took.

With this trace you are almost there and should be able to extract just about any statistic you desire. You can also manually explore the traces. The easiest way I found to do this is to open the text traces in my trusty programmer's editor (vim in my case). The traces will be large so you may wish to use dd to extract a subset to work with. This was specifically necessary in my case because the trace files were more than 4GB in size, which vim will not open successfully on a 32-bit platform. Once you have some subset of the trace loaded in your editor I recommend that you use the code folding features to make exploring the trace simpler. In my case I used vim's foldmethod=marker with foldmarker=Entering,Exiting to allow me to fold function calls. On a medium sized project it was enlightening to see just how deep the call stack went in some cases.

Now you can compute the statistics you need to find your problem. Statistics I found useful and easy to compute include skimming the stream of elapsed times for a function in question to get a sense of how long it took. I also produced a small script which scanned the trace and calculated the average time a particular function took, how many times that function was called and the sum of all time that function took. Exploring will give you a sense of which functions seem to be taking all the time, but be sure to examine the statistics from a significant run to smooth out spikes caused by other sources, such as the overhead of the profiling.
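My avg_time.sh was a bit of shell and awk; the same computation as a Python sketch, with the column positions as assumptions you will need to adjust to your own field order, looks like this:

import sys
from collections import defaultdict

def summarize(lines, elapsed_field=4, name_from=6):
    # elapsed_field and name_from are assumptions; set them to match the
    # field order you chose with awk. The demangled name is everything
    # from name_from onwards since it contains spaces.
    totals = defaultdict(float)
    calls = defaultdict(int)
    for line in lines:
        fields = line.split()
        if len(fields) <= name_from or fields[0] != "Exiting":
            continue  # only exit lines carry an elapsed time in this sketch
        name = " ".join(fields[name_from:])
        totals[name] += float(fields[elapsed_field])
        calls[name] += 1
    for name in sorted(totals, key=totals.get, reverse=True):
        print("total %.0f calls %d average %.1f  %s"
              % (totals[name], calls[name], totals[name] / calls[name], name))

if __name__ == "__main__":
    summarize(sys.stdin)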

Hopefully this will help somebody profile when no profiler exists. At the very least writing a hackish profiler as an exercise is educational, as proper profilers perform many of the same actions, just in a more efficient manner with lower overhead.

Why I Prefer Duck Typing

Duck typing is a magical form of typing where an object is considered to be the correct type as long as it implements the methods or messages which a piece of code calls upon it. It is also the type system I prefer. The three most relevant examples of languages with Duck Typing are Smalltalk, Python and Objective-C. Smalltalk because while it may not be the language which introduced duck typing, it certainly brought it into the mainstream. Python because it is one of the most popular languages which have duck typing. Finally I mention Objective-C for two important reasons. The first is that Objective-C is a compiled language with duck typing while the majority of languages with duck typing are interpreted or run on a virtual machine. The second reason is that Objective-C is the only language I know which has typed duck typing. I believe this latter feature to be a great concept introduced by Objective-C.

Typed duck typing provides the flexibility of duck typing with the basic error checking capabilities of loosely enforced static typing. Specifically every object has a type and this type is statically checked against the expected types passed into functions and methods. Additionally the methods provided by a type are also checked, to make sure you don't ask a List to do the tango. All this combined with a simple way to tell the compiler that you know what you are doing strikes a great balance between power and protection.

Below I've listed most of the reasons I prefer duck typing and especially prefer typed duck typing:

  1. You often write less code because you can avoid writing adaptor classes in many instances.

  2. Anytime you would otherwise have to use code generation (templates) simply to handle operating on different types, duck typing allows the same compiled machine code to handle an infinite variety of types; the only requirement is that the objects respond to the correct methods. Thus you avoid code bloat and multiple code paths for data structures. No longer do you need different template instantiations for bignum-object and string-object dictionaries. In fact it is possible to avoid having two dictionary classes altogether and instead have one implementation which supports the two different key types, even if the two types share no common superclass or formal interface.

  3. You don't even have to implement a method directly. Duck typing allows a method of last resort which receives the object equivalent of the call just made so that it can be handled algorithmically (see the sketch after this list). This allows many things, two examples of which are fully transparent RPC calls on distributed objects and powerful ORMs with no code generation or manual code writing. Both of these are examples of the power of proxy objects.

  4. With typed duck typing we do not give up the aid the compiler provides in checking that we make no trivial mistakes such as using the wrong variable or passing in arguments of the incorrect type. That is, we do not give up the aid, but can trivially tell the compiler to trust the programmer.

  5. Formal mathematics has been proven to be incomplete, so why should we expect the type systems in our programs to capture every correct program? It is obvious that fully statically typed languages prevent a set of correct programs from being written. I find being prevented from writing those programs aggravating. There is usually a different, acceptable way of accomplishing the same goal, but it is most often significantly more work on the part of the programmer.

  6. Programmers will move heaven and earth, while swearing, to do what they want, even if what they want is wrong. It's better to make accomplishing the goal easy so that they may learn of and correct their mistake as quickly as possible.
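As a tiny illustration of point three, here is a Python sketch of a proxy object built on the method of last resort. The class and names are invented for the example; a real RPC proxy would serialize the method name and arguments instead of forwarding them to a local object.

class Proxy(object):
    # Any method not defined on Proxy falls through to __getattr__, the
    # rough Python analogue of a method of last resort, where it can be
    # handled algorithmically. Here each call is logged and forwarded.
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        def forward(*args, **kwargs):
            print("proxying %s%r" % (name, args))
            return getattr(self._target, name)(*args, **kwargs)
        return forward

words = Proxy(["duck", "goose"])
words.append("swan")        # Proxy never defined append()
print(words.count("duck"))  # forwarded to the underlying list, prints 1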

It is for these reasons that I not only find duck typed languages more pleasant to work in, but also more productive. Interested parties will likely find it enlightening to sample what NeXT was doing in the early and mid nineties and compare that with the capabilities of other companies of the time. NeXT made extensive use of Objective-C before in many ways becoming Apple.

Design For Your User, Not Yourself

Frequently when designing something the designer comes across shortcuts. Little corners which can be cut to save significant effort on their part with seemingly reasonable restrictions passed onto the user. More often than not the designer takes these shortcuts. As a user I beg you to resist the temptation and put the effort in. You may not think that the user will notice, but they will. Any inconsistency or obviously unnecessary work on the user's part will be noticed, time and time again. Do I have to hit refresh because changes in one part of the program aren't automatically noticed in another? Must I trawl through all the settings before your application is usable? Any action which can be made simpler with no loss of generality must be made so.

It doesn't matter who the user is. Programming language designers should not bow to compiler writers, but instead only to the programmers. Application designers should not bow to the programmers but instead only to the users.

The recent drive to minimalism in software and hardware is not done because it is easier. It is done because it is difficult but valuable. With a minimal set of features it becomes possible to carefully consider the implementations of features and eliminate or reduce pain points. People don't like having to deal with issues not directly related to the task they wish to perform. Don't make them.

Manufacturing Defect

I dislike shopping online for two major reasons. The first is the risk that my credit card number will get stolen, which is annoying to deal with even if it ends up not costing me any money. The second is that I strongly prefer to examine what I buy before I pay for it. I examine it for manufacturing defects.

Before the factory everything was handmade and each piece was more or less unique. In this situation it obviously makes sense to investigate everything before you buy it to check for flaws. Later came factories and uniform products. Though this is still before my time I am led to assume that there was a time when you didn't have to check these products carefully because they were all mostly the same and each one had some expert attention to detail. Expert examination should weed out most of the minor problems which must result. Minor manufacturing defects, when fixed early, will cause no further trouble. However minor problems will eventually become major problems if ignored. One further distinction is that things used to be overbuilt and so could handle some defects without issue.

In the modern manufacturing world little of this still holds true. Products are not overbuilt, but instead use the minimum necessary materials of the minimum necessary strength in order to reduce cost. Further the pace of manufacturing has gone up. Where before every piece would receive some fine finishing work at the hands of a human, now humans barely touch the products at all.

This results in manufacturing defects. Not necessarily the kind that will cause the item to fail spectacularly, but of the nagging variety which makes an action slightly less smooth, or gives a boot a spot that rubs, or just generally reduces the lifespan of the item.

I have heard that in the Soviet Union no consumer appliances arrived in working order. Instead the family handyman would have to spend hours finishing the appliance until it worked. From then on the appliance would supposedly run forever. I sometimes wonder if we are slowly approaching this state of affairs, one small item at a time.

Cheaper at all costs is nickel-and-diming us.

Stack Machines

Recently I read Stack Computers: The New Wave and have spent a short time pondering what is contained therein. A quick summary of the technology of stack machines is that you have a CPU which doesn't have any programmer-visible general purpose registers. Instead the programmer generally has access to two stacks, a data stack and a return stack. All the arithmetic operations occur using data from the data stack and subroutine return addresses go onto the return stack. It is sometimes also possible to push arbitrary data onto the return stack to avoid accessing memory when something complicated needs to be done to the data stack. One example would be duplicating an entry arbitrarily deep in the data stack.
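To make the two-stack model concrete, here is a toy interpreter sketch in Python. The instruction names and encoding are invented for the example and say nothing about how a real stack chip is built; it only shows the data stack and return stack in action.

def run(program):
    data, rstack = [], []  # data stack and return stack
    pc = 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == "push":
            data.append(program[pc]); pc += 1
        elif op == "add":
            b, a = data.pop(), data.pop(); data.append(a + b)
        elif op == "dup":
            data.append(data[-1])
        elif op == "call":
            rstack.append(pc + 1); pc = program[pc]  # return address goes on the return stack
        elif op == "ret":
            if not rstack:
                break
            pc = rstack.pop()
    return data

# The subroutine at address 8 doubles the top of the data stack.
print(run(["push", 3, "push", 4, "add", "call", 8, "ret",
           "dup", "add", "ret"]))  # prints [14]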

Stack machine architectures have a couple of interesting properties owing to the fact that the operands are always known in advance. The first is that you can direct the ALU to perform every possible operation before you have decoded the instruction and then simply pick the correct output. Also, since the top few entries of the stacks are known in advance, they can be kept in blazingly fast register memory to avoid having to go out to main memory. Also, since there is almost no state to save, servicing an interrupt costs about as much as a subroutine call and a subroutine call costs on the order of one memory access. Finally, since most or all operations work on the top of the data stack the number of instructions is reduced, allowing smaller opcodes.

This latter point is important because the linked book implies that stack machines are predominately limited by how fast instructions may be read from main memory. An additional interesting point made by the book is that stack machines may have lower latency because you can't really pipeline them deeply due to the constant data dependencies.

Obviously stack machines haven't taken over the world. I believe that the major reason for this is C. Though it is possible to compile C to operate on a stack machine, C makes some assumptions about how the stacks are laid out and how memory is accessed which require that more work be done. Specifically C, or at least C code, assumes a single stack which contains locals and return addresses. Further, it is assumed that the locals may be accessed not only in any order, but that they exist in memory. As with register machines it is possible, in some cases, to have the optimizer figure out that a local is not accessed and need not exist in memory, but it is complicated.

Stack machines exist, more or less, solely to run FORTH, a stack based programming language. Unfortunately my limited understanding of FORTH and stack computing in general leads me to believe that getting good performance and code reuse, an aspect FORTH is famous for, requires a consistency of design throughout the system. Such a consistency almost requires starting from scratch and working in only tiny teams. Considering the FORTH style I believe that it would be trivial to produce an effective static object system and further that such a system would be, due to the extremely cheap subroutine call costs, well suited to the runtime optimized functions as used in Synthesis.

Stack machine friendly OO languages are perhaps an intriguing concept. Though it is easy to implement a dynamic OO on stack machines I am left wondering how well a dynamic OO with optional method parameters could perform.

Review: Other M

Last week Courteney bought Metroid: Other M and started playing through it. I've been watching her play it, sometimes playing from the backseat. Since I've watched the majority of the game I'm going to write a review of it in point form. Without further ado here are the important points:

  • The graphics are quite good, sidling right up to the uncanny valley without beginning to decline. I would declare it to have the perfect amount of detail for standard definition televisions.

  • Samus is much more action oriented. The first person view is no longer the primary action view. Instead you spend most of the game viewing in third person. This allows various high-energy moves to be scripted, such as lunging finishing moves and action-roll dodges.

  • There's a plot! Whether this is good or bad depends on if you like knowing what is going on in the game or whether you just like exploring and killing anything that moves.

  • The Zero Suit Samus model is too busty and thin for the character. The model looks like an underfed lingerie model and does not have the athletic body her profession would imply.

  • *Spoiler* Nagubal, gur oynpx punenpgre qbrf abg npghnyyl qvr. Guvf vf tbbq orpnhfr ur'f gur zbfg yvxrnoyr fhccbeg punenpgre.

Overall I would rate this game as "Would watch played again". Given that the controls have been simplified I may even consider playing the game myself sometime.

Review: The Singularity is Near

I have just finished reading The Singularity is Near by Ray Kurzweil. The basic premise of the book is that information processing capability has been increasing exponentially since the beginning of life on Earth. Furthermore, this exponentially increasing rate of processing is going to keep increasing, if not forever, for at least another two hundred years. By that point human society, certainly not flesh and blood humans, will be masters of all matter reachable at half the speed of light.

Specifically, Ray Kurzweil posits that this will occur because of the Law of Accelerating Returns. The premise behind this law is that as progress marches on an ever greater proportion of every industry will be equivalent to information processing and thus able to take advantage of the exponential increases in capability.

Ever increasing mastery of genetics, nanotechnology, brain mapping and simulation, and Artificial Intelligence are noted as the keys to not only continuing the exponential path of information processing, but also to solving all the problems of humankind. Of these problems disease, poverty and death are specifically mentioned. The vast majority of the book is spent showing examples of how various important industries have historically taken advantage of the increases in information processing capabilities to progress further, and then showing more examples of how technologies which were extremely primitive at the time of publishing, 2005, but supposedly only ten or twenty years from production will ensure the continued exponential increase in information processing capabilities. A small portion of the book is devoted to arguing, rather weakly in my opinion, that it is a moral imperative to maximally develop these major technologies despite the serious innate risks.

The two primary arguments for the continued exponential increase in information processing capacity are a history of exponential increases and strong AI. The first argument is that because information processing capability has been increasing exponentially for all of history, recorded or otherwise, it is reasonable to expect it to continue until some limit is hit suddenly. The strong AI argument predicts that before we hit the limit of human understanding we will have AI strong enough to design even smarter AI and computing machines, leading to continued exponential increases in processing capability per gram up to the theoretical limits of physics.

The historical data provided is convincing of the increasing power of hardware in terms of raw MIPS, however I believe that there is a flaw in Ray Kurzweil's argument in that it is not raw MIPS which determines capability, but effective MIPS. Effective MIPS is the measurement of usable MIPS available after various efficiency losses are taken into account. These losses are significant and include computer architecture losses such as cache misses, communication losses such as synchronization, complexity management losses such as abstraction and innate problem limitations such as necessary serialization. Though the raw MIPS of hardware has been increasing exponentially, the effect of these limitations has also been increasing. While I do not have any hard data on hand I would still agree that there is a net increase in the capability of hardware, but if this increase is exponential then it is at the very least much closer to linear growth than to the doubling every eighteen months predicted by Moore's Law.

Though a decrease in the exponent does not entirely invalidate the argument, it does drastically change the time scales involved according to the Law of Accelerating Returns. The five years of hindsight since the book was published provide some evidence that effective processing capability is not increasing as fast as Ray Kurzweil believed. Specifically the predictions of the available processing power for 2010 are off, already, by a factor of two or three.

The second argument, that we will have strong AI before we reach some limit, is sound in the theoretical world. The limit which Ray Kurzweil alludes to is that of human creativity, that is, the hope that we will create strong AI before the design of the necessary systems grows too large for humans to hold in their heads and still make forward progress. Whether the human mind is sufficiently capable of creating strong AI before reaching our assisted limits is unclear. However, there are other limits which are not discussed in any depth and which threaten much more than humans just not being smart enough. In chemistry there is the concept of an activation energy. For many types of reactions, if you plot the energy of the reaction, positive for energy put in, negative for energy put out, you will see a bump just before the reaction starts to output energy. Unless sufficient energy is put in to crest that hill the desired reaction will not occur. There is a similar requirement in the development of any technology. Certainly a nuclear power station can generate trillions of Watt-hours of energy throughout its lifetime, but if you do not have the energy to build the plant in the first place you can never tap that energy, even if you already have all the necessary knowledge.

It is similar with information processing capability. Though we can create systems of ever greater processing capability, they will require ever greater energy to run. In the book it is argued, based on the theoretical minimums, that this will not be a problem, but to achieve these minimum energy levels we require a level of processing capability which we do not currently have. Thus ever more energy will be required until we have reached the activation energy which enables low energy computing. It is much in doubt whether human industry will choose to support this level of energy use in competition with the other demands on finite energy generation capability. There is further the considerable possibility that the cheap energy provided by oil will run out before the processing capacity necessary for efficient computing is reached. Oil currently accounts for 37% of the world's energy production. The loss of this proportion of the energy supply will greatly exacerbate the competition information processing research faces and it seems likely that maintaining current industries will take precedence over new research.

With the threat of insufficient energy supplies in the near future ultra-efficient computing may not come to fruition at all. Pushing back the timeline for sufficient processing capability due to a reduced effective rate of increase makes it more likely that the energy will run out before strong AI becomes a possibility.

Now that I have expressed my concerns about why the Singularity may not come about at all, it is important to express my reasoning for why it should be avoided. My argument is essentially that on the path to the Singularity lies the inevitable extinction of the human race. I will demonstrate this by referring to the destructive power of the major technologies Ray Kurzweil believes are necessary to power humanity to the Singularity. In the book Ray Kurzweil covers each of these threats and concludes that they are insufficient reasons to stop progress through two lines of reasoning.

The first line of reasoning is that these technologies hold the ability to reduce human suffering and it is thus morally required that they be developed. This argument misses the point that the technology and capability to drastically reduce the aggregate human suffering in the world already exists; if the funds used to power technological progress were instead directed to making existing technologies cheaper and more reliable, and to distributing these tools to those in need, the majority of the human suffering in the world could be relieved. It merely requires sacrifice. Though it is not obvious, I also believe that this argument, when used in the context of a specific technology, may lead to ignoring the unintended side effects of the new technology and thus cause further suffering. One major example of this is manufacturing automation. While the advantages of automated manufacturing, cheaper goods, are deemed quite valuable, the reduction in the number of unskilled jobs and the resulting unemployment (it is not always possible or economical to retrain for the limited number of more skilled positions, which are soon to be automated themselves) are often ignored.

The second line of reasoning, that as long as we are sufficiently security conscious these technologies contain the necessary defensive tools, is based on two themes of invalid reasoning. The first is that we are currently dealing quite satisfactorily with the artificially created threats of computer viruses and their familiars. As counter evidence I present the Internet. Even with the best security software a person can buy, a careless user will quickly have their computer infected with several viruses, trojans and spam bots. Currently the only real payoff for the creators of this malicious software is money or personal information. A radical madman cannot effectively gain control of a significant number of critical military systems to be able to launch missiles. However a single cracker can easily amass a botnet of millions of nodes on the public Internet for the purposes of DDOSing or spamming. If we are unable to protect ourselves against quite limited malicious software should we allow malicious 'software' to take the much more potent form of custom viruses or nanobot swarms? With a greater effect on the real world the ideological payoffs increase greatly. Why limit yourself to getting the message out when you can devote a couple of years of your life to wiping out the heathens yourself?

The second invalid theme Ray Kurzweil invokes is quite surprising for a book which is all about the exponentially increasing power of technology. It is simply this: he makes the assumption that the new threats of custom viruses and malicious nanobot swarms will always be of the same magnitude as existing diseases and threats. That is, that things which can wipe a nation off the map instantly are going to be restricted to large governments, as nuclear weapons are due to the cost of their creation, and that small threats will act pretty much as diseases do now and take sufficient time to spread that they can be detected and fought. Neither assumption holds when it comes to powerful custom viruses and malicious nanobot swarms.

The most critical flaw is in assuming that these tools do not provide the power to instantly destroy an entire population. The human immune system has evolved over millions of years to handle threats of the sort that exist in nature. The threats which exist in nature, on the other hand, have evolved to spread using the tools at their disposal. This means that diseases which kill too quickly are limited in their spread by the size of the village. Diseases which don't spread or kill quickly enough give humans sufficient time to either defend themselves directly, or evolve at least a partial defence. Now imagine a custom virus which works like HIV, that is, lies in wait for years before attacking and destroying the immune system, but instead of stopping at the immune system proceeds to destroy any tissue it has infected. Now imagine that it is transmitted like the common cold. Such a virus could kill the vast majority of the human race in a decade.

Consider further the nanobot case. Since the human immune system has never seen a nanobot it is likely ineffectual in defending against a nanobot infection. Let us further assume the best case of Ray Kurzweil's future by assuming that we have a nanobot immune system covering the Earth to prevent a grey goo scenario which is ten times more effective against novel nanobots than the human immune system is against novel viruses. Under such an assumption it is safe to further assume that at least sometimes such an immune system will fail. This is likely because the determined madman can just isolate a sample of the immune system and test thousands of nanobot/virus variants against this sample to determine a set which either strain the system's limits or sneak by entirely. If it only takes one nanobot swarm to convert a nation into goo or one virus to destroy a population then any failure is not an option.

It is further not a valid assumption that these destructive technologies will be restricted to large governments. The entire point of the singularity argument is that as progress moves on more and more of the processes of creation will be information processing based and the tools for that will become ever cheaper and more widespread. You may not be able to procure the necessary technology from your corner store, but you could certainly steal it from a University laboratory.

I believe it is clear that, given the range of mental reactions and states of all the people in the world, it is unavoidable that there will be numerous disasters resulting from maliciously designed viruses, bacteria (fast plastic eating bacteria anybody?) and nanobot swarms which will kill millions on a regular basis. Further I believe that the ideological bias which Kurzweil places in his book, that Neo Luddite beliefs are indefensible, is not nearly as clear cut as his flippant responses to valid concerns may have you believe.

Overall Ray Kurzweil in The Singularity is Near does a good job of playing the starry-eyed futurist, but fails to convince not only that the Singularity is likely to happen, but even that it is desirable to cause it to happen.

Art Appreciation

Art is captured emotion

That isn't quite right. It isn't like looking at art floods me with strange emotions of a bygone era. In my experience I do not get new emotions from art. Instead art brings out the emotions from memories I already have. When I see a happy campfire scene I am not suddenly filled with happiness from some random source, I am full of the happiness I've felt during all the campfires I've sat around with friends. I would say, more accurately, that art triggers emotions which are already in you because you have already experienced them.

Art is captured sparks of emotion

This seems better and more accurate. But if art is merely the spark of emotion, then obviously there must be something already within the viewer to set aflame.

Some might argue that everybody may equally experience art since the emotion is always there, ready to be tapped. For is not every person able to experience the same range of emotions? While I must agree that every person is capable of experiencing the same range of emotions and must further agree that the emotions exist as part of normal development, I disagree that everybody is equally capable of experiencing art. It is not the emotion, but the connection between the emotion and the art stimulus which is critical to experiencing the full emotional power of art.

If we assume that these connections are important, then the world of art becomes clear. It becomes easy to understand why the great works of art fail to impress the general public. The general public no longer has the cultural connections to the art which existed when it was new. People no longer walk on the beach with suits and parasols. And yet the art buffs and critics have put in the effort to understand the cultural connections so that they may experience the art.

It also becomes easy to see why older people enjoy and collect more art. The older you get the more life experience you earn, the more emotional connections you create. It is foolish to expect children to fully understand and experience the emotions implicit in a young boy and young girl holding hands and walking down a dirt road. And yet to seniors such a scene holds many powerful memories and emotions of first love and youth.

Finally, it becomes obvious why people create art. Art holds the power to bring to the forefront the emotions of a whole series of memories all at once. This creates a powerful and complex emotional mix. As we forget the negative memories the positive emotions stand out stronger.

The important message to understand with art is that even if the art doesn't change, it gets better with age and exposure. It is also important to not expect the young or inexperienced to truly appreciate art. To appreciate art requires life experience.

In Praise of Concise Books

Imagine yourself outside on a sunny day in the shade of a tree. You go to open the book you've brought along to read on this brilliant summer day. Perhaps you are reading a novel, perhaps you are studying a mathematical text; whatever you are reading, you are reading for pleasure. Make sure you have this image firmly in your mind before you move onto the next paragraph.

Take your mental image and look at the book you are reading. Is that book some heavy fifteen hundred page monstrosity or is it a nice light pocket book? Is it some middle ground? Unless you are a glutton for punishment I would expect that you are not choosing to read the immense book under the tree and that you are doing this for more than just weight reasons.

In the not too distant past books were, in general, shorter. This doesn't mean that they contained less, just that they were concise. This is in part because before the advent of mass production of books printing large books was expensive, and before the advent of computers writing large volumes of text was difficult. Who wants to handwrite the equivalent of fifteen hundred typed pages several times over while producing the manuscript? Now these limitations on book size haven't been an issue for several decades, but as with everything with a cultural component there was a lag before large books were accepted.

Unfortunately large books are not only accepted where they are necessary, but the size of a book has become synonymous with its quality. This is an unfortunate aspect of the bigger is better phenomenon. There is the additional aspect of laziness on the part of readers these days. It is generally expected that comprehending a passage should take only minimal mental effort and study. In the past this was not the case and you are probably aware of the image of learned men poring over small volumes for weeks at a time.

Through my education I've come to realize two important things about reading books. The first is that any idea can be explained in any number of words or symbols, from incredibly dense mathematical notations to long wordy chapters. Orthogonal to the number of words used to describe the concept is the mental effort and time required to understand the concept. Given identical amounts of context a concept requires an identical mental effort to understand irrespective of the density of the explanation. This is not to say that the number of words does not matter. Too few and much time is wasted deciphering. Too many and much time is wasted condensing.

The second thing I have learnt about books is that being concise and useful requires focus and skill on the part of the writer and patience on the part of the reader. A writer must resist the temptation to repeat themselves and the reader must understand this and have the discipline to start at the beginning.

It is truly the concise books which add the most value to our lives. The long fantasy epic may provide many hours of frantic reading, but the pocket novel provides a pleasant, relaxing read. Even more so the short story provides thoughtful entertainment in the time it takes to wait for the bus. The difference is even greater when it comes to scholastic texts. The immense tomes of science and mathematics are often more confusing and less suitable for in depth study than the slim, focused texts.

The more I look at the world the more I believe that less is the answer, not more.

Writers Block

Have you ever been in the situation where you are either required or simply desire to write something, but just can't find anything to write about? If so, you've had one form of Writer's Block. It is really quite annoying, especially if you are trying to keep a consistent blogging pace (though judging from the number of comments nobody can actually bring themselves to read what I write).

Well, this is where I find myself. I have no trouble finding words to write with, but the topic has been eluding me for a little over a week now. It's also not that I don't have anything to write about. I have a couple of topics which I will write about sooner than later, they just aren't done percolating yet.

So what to do then? Give up on writing to the world semi-regularly? Write about the inane things that happen in my life, such as the fact that I've recently seen a store in a mall which sells nothing but toilets. Or do I head into obscure topics which fall flat even with my geekiest friends? I'm really not sure and that's why you get a small, boring post on how I am unable to write anything interesting.

Buy Once Environmentalism

There are many forms of environmentalism which differ mostly on what they wish to save. You can save the forests, but you'll have to give up the oceans and rivers to fertilizer. You can save the fish, but will have to sacrifice immense areas of land to become garbage dumps. There really is no single solution.

I would like to promote the idea of buy once environmentalism. The theory is simple: buy an item as few times as possible. Do not buy a new computer every year, do not buy something you are already planning on throwing away. Instead buy durable and repairable goods.

The industrial production system requires large amounts of energy and materials, and produces large amounts of waste and pollution, for every object which is manufactured. The difference in environmental costs between a single well made item and a cheaply made alternative is not large. However, the well made car, fridge, computer or can opener will outlast several of the cheap equivalents. This is obviously a net win for the environment.

There is an old saying which is relevant: A rich man buys a pair of boots for $100 and has dry, warm feet for a decade. A poor man buys a pair of boots for $50 and has wet, cold feet for six months.

Buy once environmentalism is not only environmentally sound, it is also cheaper. So buy once.

Identity

Identity used to be a simple concept. You were identified by who you were and where you lived, but since nearly everybody you knew lived with you that was inconsequential. This was the case when fire was the new thing.

Later on, with the invention of agriculture and specialization, identity became a little bit more complicated. In addition to who you are, where you live and your profession became important. In fact the profession became so important that it became part of a person's name. No longer was John sufficient, instead it was John Smith of the village of Foo.

Though this is more complicated than the simplest form of identity, it isn't nearly as complicated as identity was about to become. As villages became towns and towns became cities the law increased in complexity and the concept of identity followed suit. At some point identity split into two components: legal identity and natural identity. Natural identity remained, for a time, one's name, profession and home town. Legal identity also started out in this way, but quickly added proof of identification, such as a signature or wax seal.

From here identity only got more complicated. Natural identity expanded to include all the goods and services and distinctions which expanding wealth allowed. These components had always been there but, as with location in the beginning, few people travelled far enough for the distinctions to become obvious. Natural identity has expanded until it has reached the current state where who you are to real people is a composite of what you look like, which music you listen to, what you drive, where you live, what you wear, your profession, your personality, your interests and those sorts of things.

Unfortunately the growth of legal identity is not so simple or clear cut. As time, technology and the complexity of the legal system increased, the simple legal identity of a name and a signature became insufficient. Fraud became too prevalent and coordination between groups became necessary to the industrial legal system. It used to be that you would only deal with businesses within your local community. As long as you did this then your legal identity could be simple because it was closely tied to your natural identity. However, the industrialization of the legal system required a strict separation of these two forms of identity.

Consequently legal identity became the enormous, contradictory monstrosity it is today. A complex and fragile system of numbers and accounts spread across hundreds of organizations now defines a legal entity. This system groans under its own weight and complexity. It both stifles freedom by being inflexible and grants freedom through easy theft. The fragility of this system is the reason identity fraud is so simple and easy. Only a small subset of these numbers is needed at any one time to create new numbers tied to this amorphous legal identity but controlled by criminals.

There is hope however. We have today the tools necessary to reconstruct the legal identity system in a way which is simpler, more robust, more flexible and more secure. There are only three obstacles which stand in the way of public key cryptography reforming legal identity. These are inertia, the legal system and the police state.

The latter is reason enough to prevent such a reform. Conveniently the former two are obstacles of sufficient strength as to likely be insurmountable. Sometimes imperfection is the correct solution.

TL;DR

Web 2.0 whippersnappers have a shrinking attention span. Complete thoughts get "tl;dr". Incomplete thoughts get broadcast as worthwhile. Long sentences, greater than 140 characters, are ignored. Soon we'll all talk like young children.

Makes me angry as I crave thoughtful discussion.

Discussion Forums

In the beginning there was the fire. People sat around the fire to cook and chat. Times were good. Some time later alcohol was discovered and alcoholic drinks devised. Some time later came the public house. From this point on all forums of discussion have gone downhill.

Don't get me wrong. I'm not saying that more modern discussion forums don't have their advantages. It is undeniable that email allows discussion with more people than the pub and that instant messaging allows discussion when people are otherwise supposed to be working. However all the more modern forums lack at least one thing that the pub provides.

It may be useful to split this discussion into two parts: realtime and non-realtime. This distinction is important because it divides the people who may discuss into those who have little better to do or are discussing alongside some other task and those who are giving their full attention to the discussion. Of course this is a generalization, but a useful one. Also note that we will only be covering methods of holding a discussion, not conversations. Discussions are between more than two people.

The realtime discussion methods include: party telephone lines, IRC, party-line IM, radio. These can basically be divided into text only and voice only. Text only realtime discussions often have the problem that people type much slower than they speak. It has also become the norm to deliver text conversations as whole lines or sentences instead of letting recipients read as the sender types. This is done to make the conversation less painful to read, since you don't have to watch every typo being made. Old talk systems used to let you watch as the sender typed though. Voice communications don't have this packet problem, but they do make it difficult to split the discussion off into subdiscussions because there is often only a single channel. Realtime voice communication has the additional benefit of inflection over realtime text communication. I'm sure everybody has experienced that offhand remark which was intended to be sarcastic and was taken as a personal attack.

Non-realtime systems have a bit more variety. One thing you don't see much of is non-realtime non-text communication. I believe that this is mostly because it is a pain to do and doesn't provide sufficient gain. It may also not occur simply because people haven't thought of it yet (Maybe it is time for the Web 2.0 Video forum?). This leaves mostly text based methods. Of these the most common are: usenet, email, web forums, BBS's, blogs, social networking sites and article comments. Now some of these have mostly fallen out of fashion in the past decade, such as usenet and BBS's, but they are all still in active use. Mostly these systems are divided into two major categories: messages come to you and you go to the message. Many of the former are the older systems, usenet and email for example. These are differentiated in that each user uses some software to connect to a server which contains all the messages waiting for them to read. These messages themselves originate on many systems from many people. The latter type of systems, where you go to the messages, are mostly the newer style systems. These include web forums, blogs, social networking sites and comments.

The major advantage of the systems where the messages come to you is that it takes less time to collect the messages and there is more flexibility in viewing them. If I can't stand reading a flatly threaded discussion with dozens of participants then I can use a threading client. However there are also those who can't stand threading. Using our own software gives us both the ability to view messages as we desire. It is difficult to explain the rise of the other form of mechanisms, where you need to go to the messages, except in the context of the increasing view that the entirety of the Internet is nothing more than the Web.

Now how do these modern methods compare? Well the current voice methods are expensive and time consuming, though broadcast only forms are starting to gain prominence. Specifically amateur podcasts and video blogs can be quite successful when they keep a tight focus. Blogs and comments tend to go together, but it may be argued that this is more the foil and the discussion. The major problem with comments is that they are too dispersed and activity in them dies out quickly. Social networking sites are a bit better in this regard, but they are not conducive to in-depth discussions, mostly due to cumbersome interfaces and in some cases length limits.

In many ways I find it difficult to top the capability of the old timers of the Internet: usenet, email and IRC. They have really covered the bases as far as I can see. IRC handles most of the realtime discussion needs. IM tries to be as effective in discussions, but tends to end up muddled and difficult to coordinate. Though it has fallen from the public eye I truly believe that no forum of discussion with random strangers has topped usenet. There are places to talk about any topic on usenet and the capabilities of modern readers far exceed those of any web forum. There is also the fact that the discussions can be global in nature. Sometimes it's nice to discuss only with your friends, but you'll often hit limits as you discover that your friends either all agree on a topic or just don't care about it.

Finally we have email. People complain about email all the time. They don't like the SPAM, it isn't fast enough, it isn't pretty enough, etc. Yet for all this there has been no true competitor which has gained traction. There are no other systems which are able to handle the volume while still providing quite good reliability (sure, the message isn't guaranteed to arrive, but 99% of the time it does and email handles server outages pretty well). No other system does this while also allowing large and varying lists of recipients and true offline capability with attachments. In fact, perhaps the greatest complaints about email come about because of terrible email readers (Webmail and Outlook are not good tools by any means) and the lack of authentication.

Authentication of email is an interesting problem. On the one hand a large part of the robustness and flexibility of email comes from its store-and-forward nature, and yet a large part of its utility is the ability to send from nearly any server an email claiming to be from nearly any domain. That is, it is not only possible, but common, for a business to handle moderate volumes of email without hosting and maintaining their own mailserver. Furthermore, what mailserver they use is often inaccessible (for sending outgoing email) from their workstations. Instead they go through the office ISP's mailserver to send their mail. A system which didn't work this way could certainly be made to work, but at an increased cost and complexity.
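
To make that concrete, here is a minimal sketch of how little SMTP itself checks. The addresses and the relay at localhost:25 are made up for the example; the point is only that the From header is plain data with no tie to the machine doing the sending.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "someone@example.com"   # nothing binds this to the sending machine
    msg["To"] = "friend@example.org"
    msg["Subject"] = "Hello"
    msg.set_content("Plain SMTP does not verify who the sender claims to be.")

    # Assumes a relay listening on localhost:25 that is willing to accept the message.
    with smtplib.SMTP("localhost", 25) as relay:
        relay.send_message(msg)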

One consequence of this is that there is no association between an email address and the sending computer. This results in a sizeable chunk of the SPAM. It is important to note that SPAM happens even when there is little doubt that the actual sender of the message is the authenticated owner of the account. Now there are solutions to this problem, but not many people use them. The best solution which I am aware of is PGP/GPG. These allow messages to be signed so that there is some guarantee that they were sent by whoever claims to have sent them.

I believe that the reason these have not really taken off is twofold. First, the need is not so acute in most situations that it is worth any effort to rectify. This is becoming less the case as more and more is being done online, but it is a reason nonetheless. The second reason, in my opinion, is that the proponents of these systems have been too zealous in achieving perfection right off the bat. Any tutorial you read will give you complicated rules of thumb with scary warnings in an attempt to have you construct a perfect and watertight web of trust, such that this web of trust can be used to conduct the most confidential and important of business communications. This is really the wrong tactic to take. Instead they should simply promote the most basic use as a toehold. Instead of admonishing users to use a strong passphrase and protect their private keys like they were priceless jewels, they should recommend that, unless there is a need for further security, users skip the passphrase and make sure that every computer account they send email from has a copy of the key. These programs could help this along while still maintaining security if they tagged such passphrase-less private keys in some special way. That way those who desire security will know not to put any trust in those unprotected keys.

In fact, if the big webmail providers automatically created a key and automatically signed every message it would increase the security of email in general by more than all the promotion of encryption software to date. Would this provide perfect security? Of course not, but some security is much better than no security.
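
As a concrete toehold, here is a minimal sketch of signing and verifying with the standard gpg command line tool, driven from Python. It assumes a key for the given address already exists in the local keyring, and the file names are purely illustrative.

    import subprocess

    def sign_message(path, signer):
        """Clearsign a text file with gpg and return the path of the signed copy."""
        signed = path + ".asc"
        subprocess.run(
            ["gpg", "--local-user", signer, "--output", signed, "--clearsign", path],
            check=True,
        )
        return signed

    def signature_ok(path):
        """Return True if gpg accepts the signature on the given file."""
        return subprocess.run(["gpg", "--verify", path]).returncode == 0

    signed = sign_message("message.txt", "someone@example.com")
    print(signature_ok(signed))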

Now back to discussions. There exist realtime discussions on IRC, though those have issues with long, in-depth discussions because people are usually not focused solely on the discussion as they would be in the pub. There exist email mailing lists, but they don't have the level of privacy that a pub provides. There also exist a small number of optionally encrypted mailing lists. These are perhaps the best fit for good discussions when you are unable to take it to the pub. Sure, secure email doesn't have beer and doesn't have emotion, but it does have the participants' focus.

Thus, in my search for valuable conversation I have created a mailing list. It is a private mailing list, so notify me if you want to be added. The mailing list also supports encryption. This means that anything may be discussed there, safe from prying eyes. Hopefully it becomes a venue for valuable discussion on any interesting topic.

Exceptions, Massive Concurrency and Pseudo-Serial Languages

Imagine, if you will, that you are in a world where the performance of a single processing core is no longer increasing. Instead every device has an ever increasing number of processing cores, each of limited capability. As I'm sure most of you realize we have nearly reached this point. Those of you who have been paying attention will also realize that the current common systems for allowing concurrent computing have limitations. Functional programming just never caught on, threads are difficult to get correct, pipelining can only soak up so much processing power and the other systems are more specialized with the consequent limitations. Research is being done into programming models which are not functional and not serial, but there is no convincing evidence yet that they will remove serial languages from the seat of domination.

So what can be done if we wish to stay within the confines of serial programming languages? Obviously we cannot stay strictly within the serial paradigm. What we need is something which operates concurrently, but looks as if it is being done serially. Along these lines I believe that the most promising avenue is promises. Promises are basically delayed subroutine execution. When these subroutines are first called they return a token which is used to access the result of the computation. If the result is ready when it is accessed then processing continues as if it were always available. However, if the result is not ready then execution waits until the result is completed. During all this time the computation has been queued for execution and, ideally, completed.
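
Python's futures give a rough feel for the shape of this, though they are an explicit library rather than the automatic promotion imagined here. A minimal sketch:

    from concurrent.futures import ThreadPoolExecutor
    import time

    def slow_square(n):
        time.sleep(1)          # stands in for an expensive computation
        return n * n

    with ThreadPoolExecutor() as pool:
        promise = pool.submit(slow_square, 7)   # returns immediately with a token
        # ... the primary flow of control carries on with other work here ...
        print(promise.result())                 # blocks only if the result isn't ready yet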

Promises have an interesting interaction with the popular language feature: exceptions. Exceptions, as many of you are aware, are non-linear program execution constructs. Specifically, a series of statements is checked for exceptions being raised. If an exception is raised then execution leaves the normal flow and enters an exception handler. Now this works, in modern languages, because there is the assumption that code executes serially. One necessarily makes the assumption that once all the checked code has been executed no more exceptions may arise as a direct result of that code. It is possible that the results of that code were incorrect in some way and later code will react badly, but that is an indirect result.

Promises break this state of affairs, because the execution of all the statements is not guaranteed to occur before the primary flow of control has left the checked section of code. There are five differing methods of dealing with this.

  1. Pretend that the exception occurred when the primary thread was within the checked block. In this case you call the exception handler for the checked code section. The primary problem with this method is handling execution which has occurred after the checked section. The ideal case would be to enclose all computation within a transaction and roll back that computation. I haven't yet thought too much about those systems, but I have the feeling that these systems are necessarily equivalent to functional programming along with all the associated problems.

  2. Don't pretend that the exception occurred anywhere in particular. Instead raise the exception within any checked section which is in the primary thread whenever the promise happens to raise it. This has the obvious problem of being entirely useless because an exception may be raised at any time. There can be no assumptions made about the flow of execution. Only the top level exception handler of a thread would be generally useful.

  3. Pretend that the exception occurred at the point when the result of the promise was accessed (a sketch of this behaviour follows the list). This loses much of the advantage of exceptions. Instead of having logically removed yet geographically associated handling of exceptional situations, the programmer is required to handle exceptions where the results are accessed. This really breaks the geographic association of error handling code with the highest useful level of processing which caused it. This variant of promise exception handling does have the advantage of making it obvious to the programmer what processing has been done and needs to be undone. The biggest problem with this is spreading the error handling code over a large volume of code, likely requiring multiple handlers be written for each user of the result.

    I suppose it may be possible to attach an exception handler to each promise, but then this is conceptually identical to the first option.

  4. Not have exceptions. This is perhaps the most radical option, mostly because exceptions have been seen as the greatest advancement of serial languages in the nineties. Now in some ways the end result of this choice is similar to the previous option. The error is communicated to the primary thread at the point the promise is accessed. The primary difference is that the error does not automatically bubble up the stack if there is no handler. Instead all errors must be handled at the first access. This enforces more disciplined error checking (see how well that worked in C?), but does make handling errors which must be passed up more cumbersome.

  5. The final way to handle exceptions is to not pass them to the primary thread at all. Instead there can be a per-promise option on whether to retry the promise, cancel the primary thread or communicate with some error handling thread as to how to resolve the failure. This error handling thread could either provide a valid result to be used, cause the promise to retry, terminate the thread or maybe something more complicated such as rolling back a stack limited transaction. This last option assumes that there exists some section of processing which is easily functional in nature and allows easy rollback.
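
For reference, the third option is roughly the behaviour Python's futures already have: the exception is captured when it happens, but it only surfaces in the primary thread at the point of access. A small sketch:

    from concurrent.futures import ThreadPoolExecutor

    def risky(n):
        if n < 0:
            raise ValueError("negative input")
        return n * n

    with ThreadPoolExecutor() as pool:
        promise = pool.submit(risky, -1)
        # The primary thread carries on; nothing is raised here.
        try:
            promise.result()          # the stored exception is re-raised at access time
        except ValueError as err:
            print("handled at the point of access:", err)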

The question is what should be done in a language where any call which is expected to take more than a microsecond is automatically promoted to a promise. Such a language may be able to take advantage of implicit, short term concurrency in code. Concurrency which is otherwise too expensive to exploit by more traditional means. However the exception problem must be resolved. Almost certainly the first two options are unacceptable. The first option results in the exception handlers being called in entirely unknown situations. Any amount of processing may have occurred before the result of the particular promise is accessed. The second option has a similar problem and the additional problem that exceptions become unusable to check and handle recoverable errors. This second option makes it impossible to use exceptions as flow control, which is convenient in certain situations.

Which of the three remaining promise safe exception models is the most useful, with the fewest gotchas? I'm not sure, but at the moment I am leaning in the direction of not having exceptions for recoverable errors, but instead having exceptions handled in a thread top-level exception handler for serious errors only. Errors such as IO failures, memory exhaustion or the like. I believe that such a compromise will allow exceptions for what they are best at, handling truly exceptional situations, while providing the least number of surprises in dealing with promises.

Thoughts?

Augmented Intelligence

This past weekend I read Accelerando by Charles Stross. The central theme of the book seems to be a story of human life in the time surrounding the technological singularity. It also covers other topics which include: autonomous corporations, intelligence augmentation, capitalism in a post-scarcity world and immortality through consciousness upload. I find several of these interesting and may discuss them at length later. In the interest of expanding my reader base to those who don't believe in reading more than two paragraphs, I'm going to comment only on the first couple of pages.

I highly recommend reading the first couple of pages and considering if you are enough of an information junkie to enjoy living like that. For those who aren't enough of an information junkie to have read that, basically the protagonist has reality augmentation glasses so he is constantly deluged with information. I couldn't live like that. That life seems so spastic.

Later on, after technology has progressed, the humanoids are branching and merging their consciousness to perform all the tasks they need. This includes, in one case, living several simulated months when meeting a new person. Oh how I have often wished I could do that.

The Economy According to Travis

Everybody should be given the opportunity to create their own theory of economics. There are the common views of economics, the predominant views of capitalistic economics and several less popular theories which get media time every time there is a crisis which shows the failings of the current leading choice. This is one reason everybody should create their own theory of economics: the current ones aren't perfect. The other reason people should create their own theory is that the popular theories are popular not because they are more correct, but because they enable some to make money off gaming belief in the system itself. Here is my theory.

As I see it there are two sorts of activities one can partake within an economy: producing wealth and conserving wealth. Producing wealth always involves changing physical material from one shape or form to another, but it is important to note that not all manipulation of physical material creates wealth. Conserving wealth, on the other hand, does not necessarily involve physical manipulation. Conserving wealth is simply that, doing something more efficiently so that it requires less wealth.

It is critical never to confuse wealth and money. Money is a medium of transfer of wealth, it is worth something only so far as it can be exchanged for goods or services. Money is also not the only medium of exchange, you can buy thousands of dollars worth of skilled labour with beer, pizza and friendship. Money is also not an effective store of wealth; events within the economy make a fixed sum of money have a varying value over time. The current economic system favours decreasing the value of money as time progresses.

Now there are two sources of wealth: raw resource extraction and human labour. Obviously most activities involve some amount of labour, but in many cases it is quite minimal, such as selling software online. Conserving wealth is simply making some action require less wealth. This is most often saving labour, as in the case of many machines, or improved engineering to reduce the amount of material required. The reason reducing labour is more common is because reducing the labour necessary to mine iron makes iron cheaper, which makes steel cheaper, which makes mining equipment cheaper, which makes iron require less labour. Cycles of this form, and more complicated ones, occur throughout the economy.

To clarify the distinction between producing wealth and conserving wealth some examples are warranted. What is traditionally considered resource extraction, logging or mining for example, produces wealth. On the other hand, nearly all of the 'white collar' careers produce little to no wealth, but instead conserve wealth. That is, lawyers don't create anything of value, but they reduce the costs of making agreements between parties with disjoint needs. Programmers tend to create little value, but tend to save significantly on the costs of communication and processing and, to a lesser degree, increase efficiency overall. In between these two extremes you have a range of production versus conservation values. Restaurants are a good example. Restaurants take some raw materials and produce a meal. The restaurant itself did not create or procure the raw food so it is not primarily producing wealth. Additionally the restaurant isn't entirely replaceable by making a meal at home because the meal is often better or cheaper or the venue is more relaxing or some combination. Thus a restaurant is a mix of producing wealth, improving the value of the raw food, and conserving wealth, saving the time of the patrons or wasting less food or providing some level of experience which would be difficult to have at home.

As may be evident in the previous example, entertainment is a wealth producing activity. It is a straight conversion of labour into wealth. Note, however, that mass-produced entertainment is primarily conserving. One is always able to spend their labour creating entertainment for themselves, but producing it and then providing it to others saves people from this task.

So now that we know what the two forms of activities are, we should discuss how their value is determined. Wealth production is valued based upon supply and demand. This supply and demand is similar to, but distinct from, the common supply and demand, mostly in that it does not produce a curve in two dimensions, but instead one in three dimensions: supply, demand and value. While the physical constraints are loose, that is, there are plenty of easy to access trees, supply will meet demand at a nearly flat value. However, as supply becomes limited the value increases. While the demand remains high or continues to grow the value will increase. In the case of the two extremes, high demand with high supply or low demand with low supply, the value will remain high. This latter distinction from the more traditional definition occurs because of labour costs in switching professions, handling small orders and the like.

Conservation of wealth also has a value. However that value is limited to the value of the real resources, labour or material, that can be saved. If a skilled builder can build a cabin in half the time and with half the wood then that is the upper limit on the value of his specialized labour. This draws attention to the distinction between specialized labour and general labour, the labour of the non-specialist, or even the labour of the specialist in a field other than their speciality. As an example, anybody can paint walls, but that does not make them a painter and it is likely that the professional will do a better job quicker and cheaper in terms of wasted materials.

That is my theory of economics as it stands. It is not a complicated theory, but then again neither are the leading economic schools. What is complicated are the consequences.

So what are the consequences of this philosophy of economics? The first is that labour is expensive. So expensive, in fact, that the majority of the wealth of the world is spent avoiding labour at all costs. One need look no further than automobiles, trains, ships, electric generators, the telephone, computers or any communication mechanism. The bulk of technology has been created to avoid having to pay a person to walk or carry or think. The second is that the only constant profession is farming. Other resources will become limited and be abandoned for new materials; for every labour saving job there is some person trying to save the labour of that job. The world will continuously strive to do more with fewer people while at the same time increasing the number of people. Wages will continually fall in real terms because, although labour is expensive, anybody will sell themselves cheap when starvation is the alternative.

Also, programmers will continue to try to program everybody out of a job. I believe that one day they will succeed. When this comes to pass let us hope we have all joined the leisure class.

Survivability and Population Density

While by no means the most numerous and certainly not the most physically imposing, the human race is the single most powerful force on Earth. We've done it through the use of, and dependence upon, technology. One need only read my previous post, featured right below this one, to see that there are challenges ahead caused by this very technology. The greatest threats are the ones which hold the possibility of even a short disruption of the economy.

Now how does this relate to population density? That's easy. The more dense the population the smaller the area of ecological disruption and the fewer resources which are necessary to put toward transportation of goods around the economy. However, as part of this the greater the population density the greater the dependence upon the smooth operation of the economy to ensure that the people living in these areas get the goods they need to survive. Conversely, the lower the population density the greater the proportion of resources which need to be put forth to transportation in exchange for a lesser dependence upon a smooth running economy.

To see this in action consider water. In a high density city water is supplied by a small army of technicians which ensure that it is cleaned, filtered, pumped and delivered. In most rural areas each household has their own well to supply their needs. Now on a per capita basis the city people are less ecologically damaging, but if the economy has a hiccup then the water may very well stop flowing.

Next consider electricity. The vast majority of people have electricity delivered to them by the massive machine that is the economy. However, when the power goes out in the city large areas quickly become inhospitable. All those sealed office towers quickly gain tropical climates without constant air conditioning. The upper floors of skyscrapers become accessible only to the most athletic. Many places become too dark to move around in when the lights go out. Similar things happen in lightly populated areas, but there is also significantly less dependence in those areas. If there are no lights you can move to a window. If it is too hot you can let air in. Nowhere will it be necessary to climb twenty flights of stairs.

Perhaps most important is the availability of natural resources to make up for economic shortfalls. In a city there is only a small amount of burnable material on a per capita basis. In the woods there is plenty.

This is all important because there is no perfect solution. Living in higher density areas will reduce the environmental impacts of economic activity, but increases the sensitivity to disruptions caused by ecology and other factors. As such cities help prevent ecological disasters, but are much more sensitive to disruptions when they come. Living on a farm makes it easy to support yourself, but, unless you are farming, you will be spending significant amounts of time and fuel commuting.

In the middle we have the suburb. In any civilization ending disaster the suburbs are where you want to end up after the majority of the human population has died off. They are not as good as farmland for supporting yourself, but will have ample precut firewood (furniture), shelter and other leftover bits of technology (bikes, lights, pots, etc.). Unfortunately suburban living is nearly as expensive, in terms of resources per capita, as farmland, but is just as dependent on the economy as the cities.

This is the core of the problem of choosing the correct population density for survivability. Choose high density for prevention at the expense of greater damages. Choose low density for lesser damages at the expense of lesser prevention.

The End of the World

Every age has its various challenges, naysayers and preachers of the coming doom. The present is no different. For those who are not on top of the current list of issues I summarize them below. As in all cases, being forewarned could lead you to become forearmed.

I present these threats to civilization in order of exacerbation. That is, latter items, should they come to pass, are likely to exacerbate the difficulties caused by some of the former items.

  1. Solar Superstorm. The Sun has solar storms which can increase the strength of the solar flares and the solar wind. These, in turn, touch the Earth's magnetosphere. Apparently once every couple hundred years the Sun has a very strong storm which causes extreme activity around the Earth. The effects include the Aurora Borealis being visible in New York City, long distance powerlines being charged with large inductive currents and general dysfunction of the electrical grid and radio spectrum. Watch out for blown transformers and widespread, long duration power outages.

  2. Emptying of the Seas. Man has fished the seas of the world for his dinner since the beginning of civilization. However, in recent decades the demands placed on the oceans have been steadily increasing. This has led to overfishing and the collapse of various fish stocks, such as cod. There is little sign of this slowing as the fishing industry continues to move to less desirable species to fill their holds. Watch for seafood becoming an unaffordable treat.

  3. Agricultural breakdown. With the rise in industrial agriculture and the decline of the family farm the tendency of farming has been towards maximizing production per acre at all costs. This has meant pesticides, chemical fertilizer, engineered crops, immense irrigation projects and minimization of fallow land. Industry has optimized agriculture and, as with all optimized systems, any unexpected change can bring the system down.

    Add to the current demand the increasing demand for Western style diets full of variety and animal products in China and India and you have powerful pressure to increase production. This will lead to further optimization and industrial farming on marginal lands. Watch for shortages and increased prices.

  4. Empty Aquifers. Perhaps the single greatest invention in the history of man has been irrigation. Irrigation freed farmers from dependence on rain. It has allowed marginal land to feed the world. Unfortunately much of this water, especially in the plains of the world, comes from underground aquifers. Though these aquifers do replenish over time, they do so at a lesser rate than we are currently emptying them for irrigation. It is expected that in the near future some of the aquifers in the breadbaskets of the world will become empty. Watch for failing crops in the heartland of the USA and reduced production from southern California as two of the first difficulties.

  5. Aging global population. No matter how hard people try, they can't stop getting older. Couple this with a declining birthrate and you end up with a population whose average age increases with every passing year. Civilization has not seen a population where more than half of the living people are retired. This brings with it several challenges, the most difficult of which is economic. Will the economy function when half the population does little productive work? Where will the money come from for pensions, health care and education?

    For the young this is a great opportunity because labour and skills will be a seller's market. For the old it is less clear. Those who have saved nearly nothing will find themselves working until they die. Those who thought they had saved enough will find every service suddenly more expensive than they predicted. Those who invested in real estate may find prices dropping as retirees sell en masse to pay for their retirement. The infirm will perhaps be hardest hit as there won't be enough people of working age to hire a sufficient number of nurses and other aides.

    Watch for stiff competition for workers causing wage increases and inflation and labour shortages all around.

  6. Peak Oil. Peak oil has been touted since the seventies as coming any year now. It has not come yet, but we are thirty years closer. Modern civilization is brutally dependent on oil. It is used in the manufacture of fertilizer, plastic and machinery of all sorts. Some food is even made from it. Oil runs our transportation systems and some of our electrical grids.

    Peak oil is not running out of oil. Peak oil occurs when the production of oil, how much is pumped out of the ground, cannot be further increased to keep up with demand. The industrialization and westernization of China and India are rapidly increasing demand. When peak oil has arrived the essential structure of the economy, both globally and locally, will change. Watch out for the end of air travel, increased prices across the board and the rebirth of rail.

  7. Climate change. Though there has been much talk in the media about climate change, there has been little discussion of what the cause of the real difficulties is. All the difficulties caused by climate change, whether it is hospitable deserts becoming inhospitable or rising sea levels or more powerful winter storms, are only problems because they are change. Civilization has optimized its functioning on certain assumptions of climate. It has also proven that it can withstand fierce storms which occur on a frequent basis; just look at places hit with hurricanes every year.

    The difficulty comes mostly from holding out through the adjustment period. Some lands will require evacuation. Others will become more productive. Watch for mass moves and general upset of all the components of civilization. Especially watch out for the newly impoverished who have been forced to abandon their lands, possessions and ways of life.

  8. Peak Energy. Similar to peak oil this is when the production of energy, mostly electrical energy, cannot keep up with demand. Most of the good energy sources are near capacity. There are few good valleys left to dam, nuclear has political problems and renewable sources require decades of heavy research. Watch for drastically rising electricity prices, the return of battery powered appliances and rolling brownouts.

  9. Good for nothing young people. Since the beginning of time older people have been claiming the next generations are of a lesser sort. This time is no different. As in ages before, the young people of today are: disrespectful, blasphemous (both religiously and ideologically), lazy, lacking in vision and generally good for nothing. Of course most of that has come about because they have been pampered compared to their parents. However, there exists the seed of a will to live and succeed inside every person, no matter how useless they appear. Watch for changing definitions of success, intergenerational struggle and, ultimately, adaptation to the new world.

  10. Revenge of the exponential. Since the beginning of the Industrial Revolution the global economy has been growing exponentially. The exponential function is perhaps the most misunderstood function and yet has the greatest impact on the life of the common person. Compound interest, inflation, population growth and resource consumption are all examples of things which have been growing exponentially for decades or centuries (a quick doubling-time sketch follows this list). When the exponential system meets up with bounded physical limits there will be trouble. Watch out for painful economic restructuring likely resulting in all existing virtual asset investments being wiped out.
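
To give a feel for how quickly a few percent a year runs into fixed limits, here is a quick doubling-time calculation; the 3% growth rate is just an illustrative figure:

    import math

    # Years for a quantity growing at a steady rate to double in size.
    def doubling_time(rate):
        return math.log(2) / math.log(1 + rate)

    print(round(doubling_time(0.03), 1))   # ~23.4 years at 3% growth
    # Ten doublings (a bit over two centuries at 3%) is a factor of about 1000.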

And those are the major issues, as far as I am aware, which will come to the forefront in the next twenty years and should be well into full swing within the next hundred. The outlook may appear grim, but the costs of persevering are not insurmountable. All it takes is a strong will, sacrifice and ingenuity.

Why You Aren't a Cop

As you may be aware the RCMP has been on a big recruitment push for the past couple of years. According to this source part of the reason for this is that there was a big recruitment push in the seventies. Well, those recruits are getting ready to retire. That article also alludes to reduced interest in the profession. Now it is not only the RCMP which has this recruitment problem, municipal forces have a similar problem to a lesser degree, but the RCMP has it the worst, mostly because they offer lower wages and remote posts.

To understand the core reason nobody wants to be a police officer you need to first understand that, to most of the public, a cop is a cop. To most there is no relevant distinction between the RCMP, the Vancouver Police Department, the LAPD or bylaw officers. This is important because I firmly believe that policing suffers from a serious negative image. Do not forget, however, that though an image may not be entirely accurate, images tend to be more representational than fictional.

In the past, police were firmly viewed as upstanding members of the community. They were friendly when doing their rounds and helped people. More often than not they were a voice of reasonable authority who could be trusted to act in the interest of the community. This is significantly less true today.

Today the direct interaction people have with police tends to be restricted to getting a ticket for something they consider perfectly reasonable. This is the single greatest problem of the modern police force. No longer do people consider the police friendly and reasonable, but instead they are viewed as speed bumps in the course of living a reasonable life. Worse than this limited interaction is the relative increase in surveillance caused by the increase in population density and unmarked cars.

It used to be you could drive ten minutes from home and be in a rural area where there was little fear of being caught. If you wanted to drink in a field you did so. If you wanted to do a bit of drag racing you did that too. Mostly nobody got hurt. If you did that now you would often find that the cops would show up, either because they are cruising around, or because somebody called them. If your fun is often ruined by police you won't think much of them.

Then there is the recent increase in unmarked cars. So called ghost cars have a legitimate purpose in formal, undercover investigations. However, I have noticed many such cars pulling people over. Such actions can be construed as the first steps to a police state and are definitely not friendly. If police hide their presence from the community at large they should expect to be treated like any other group which hides their job from the public, namely criminals.

Those are the reasons people have negative direct experiences with police. If the direct interactions are negative, the indirect interactions are downright terrible. As noted above, to the public all police and all police forces are the same. This means that anytime a person reads a story or views a video of police brutality, corruption, use of unnecessary force, unreasonable TASER use, speed traps or police provocateurs they see all police as untrustworthy.

Police have a serious image problem as being against freedom and the public. It is no wonder that recruitment is down. A much more severe problem, which I have not seen addressed, is the self-reinforcing nature of the problem. If fewer good people wish to become cops there will be fewer good cops. With fewer good cops the impression will tend to be more negative. Not all these problems are the fault of the various police forces; if the politicians demand that there be no tolerance, then there will be no tolerance. Many of these problems have solutions within reach of these forces, they just need to start serving the entire community again.

The Three Stages of Success

When a profit seeking venture unleashes an innovative product the world is full of possibility. The first time a significantly more reliable car rolls off the production line, the first time a new search algorithm is used by the public or the first time that an AI makes a stock trade, the world changes. Suddenly the life of some is better. This is the first stage, taking over the world.

Then comes success. This is when everybody wants this new car, uses this new search engine or that AI starts making money. The product is on top of the world. The venture is profiting handsomely from the growth. This is the second stage of success, profit.

If the product didn't exist in the real world this would be the end of the story. The growth, by definition an exponential process, would continue indefinitely. Most people believe they live in this ideal world and they base their actions on this assumption. It is for this reason that nearly every venture fumbles in the third stage. The real world has limits; there are only so many people to buy cars, only so many searches, only so much money.

The third stage is dealing with the fact that the world has been taken over. Everybody owns a car which is equally reliable and efficient, most of the searches are done using this new algorithm, the AI now controls a significant portion of the world's money. The growth doesn't necessarily end at this point, but the rules of the game have changed. Once everybody owns a car far fewer need to be made, but making them less reliable means fewer older vehicles will be replaced. When most of the searches are done using the new algorithm, creators of web pages include less metadata explicitly for the computer. Why add numerous links when you are just a search away? When the AI is a major force in the economy the rules of that economy change, old heuristics become invalid and new ones appear.

Those who fail to plan for the third stage are doomed to decay. This decay may be slow because of the massive reserves gained in the second stage, but decay of the product is unavoidable. Those who plan will simply silently replace the product with another new one.

Planning too far into the future is a waste of energy, but in the life of a product (1. World Domination 2. Profit 3. Saturation) it is important to take the next stage into consideration.

If I Had a Majority

If I were the leader and had a majority in the House of Commons there would be some changes. The first and most significant change would be my goal to be a one term wonder. I believe that the largest source of dysfunction in any political system is career politicians. If you are worried about re-election then you are unable to focus on solving problems. If you don't solve problems effectively then you need to worry about being re-elected. It is a vicious cycle. The only way out is to not consider re-election an option. Aiming to not be re-elected has the added advantage that enemies can be garnered without fear. The ever present enemies of any solution are slightly depressing.

I start by not wanting to be re-elected and only needing to conserve my political capital for the length of one term. This shouldn't be that difficult. The Conservatives seemingly haven't had any political capital for three elections. What do I do with my improbable power and freedom? Make things better.

I start by moving the age of consent back to where it was ten years ago. Nothing good has come from coddling children and nothing but bad has come from the extension of childhood.

Along the same lines I'd reduce the Federal minimum drinking age to fourteen. So many teenagers already drink from this age that it isn't effective. Even worse, they are forced to do it in secret. Things done secretly are never done as safely nor with as much moderation. Reducing the drinking age has the added benefits of pushing the frequent binging to a time when they have less money and are, for the most part, unable to drive. Nothing makes it easier to avoid driving drunk than having lots of practice.

Following this line of thought I would make drinking in public legal. Plenty of other countries allow drinking in public and their society hasn't fallen apart. Again being forced to secretly drink in public worsens the situation. When you can't bring beer you are forced into the water bottle filled with vodka. Public intoxication will still be illegal.

Next on the agenda comes automotive fuel efficiency. I am not entirely certain of the rules at the moment, but they certainly do not please me. I would set the minimum combined city/highway fuel mileage to 30 MPG for cars and 20 MPG for light trucks. These minimums would be increased by 3% each year until such time as liquid petroleum fuels are no longer used.
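
A quick sketch of how that 3% yearly increase compounds, using only the 30 MPG and 20 MPG starting points above:

    # Minimum mileage after a given number of years of 3% increases.
    def minimum_mpg(start, years, rate=0.03):
        return start * (1 + rate) ** years

    for years in (10, 25):
        print(years, round(minimum_mpg(30, years), 1), round(minimum_mpg(20, years), 1))
    # After 10 years: roughly 40.3 MPG for cars and 26.9 MPG for light trucks.
    # After 25 years: roughly 62.8 MPG and 41.9 MPG.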

Having set personal transportation toward improvement I would turn my eye toward public transit. The Federal government is unable to do much directly, however I would pressure the appropriate governments to not fund transit via taxes raised from automobile use. It is counterproductive to have transit funded by gasoline, parking, insurance and other taxes. How can an effective transit system be developed in any region if every improvement and ridership increase reduces funding for that same system?

While these are happening I would move forward with policies to decentralise the Federal Government offices. At the moment a large majority of the Federal Government is housed in the area of Ottawa. This made sense when most of the information required to run the government was on paper and the postal system was slower. In the present, however, most communication is done using digital documents. Having all the offices in one geographic area is an inefficiency. How many hours are wasted each day because of traffic in Ottawa? How many processing delays exist because all the work is done in a single timezone? How much economic harm is done by funnelling so much money into southern Ontario at the expense of the outlying regions? Much better is distributing the work.

This work I would distribute to small towns all around the country. Putting these new distributed jobs into large cities will, in time, recreate the centralisation problem we already have. Small towns have the additional benefit of likely requiring lower wages due to lower costs of living. Prime candidates for these government offices are towns in the North. Many people in the North are unemployed seasonally and having consistent jobs will reduce unemployment rolls. The North is all wired, and it doesn't matter how bad the winter storm is, you can still make it across a town of one or two thousand to an office just a handful of kilometres away.

In moving the government offices I've reduced poverty in small, remote towns and the unemployment rolls. I've possibly even saved money in the long run. How else can I improve the countryside? Infrastructure is how. Specifically building or upgrading rail/highway/fiber links between communities. Of these three the least important is the highway. I propose to help pay for this infrastructure through hiring the unemployed in the areas. Additionally I propose an opt-in programme for prison inmates where they will agree to work in a labour camp to work off their term a third faster. Inmates may be paid, but it will be well below minimum wage. Having inmates work should ease the burden on the prison system. Nothing reforms a person like five or ten years of hard labour.

Speaking of criminals, I would immediately shut down the long gun registry. It is immensely expensive and provides no benefit. Criminals just don't use hunting guns.

Relatedly I would rewrite the way corporate fines are computed. Instead of whatever system is in place now I would institute a statistically rational punishment scale. It works like this: first take the maximum amount of money breaking this law may have saved the corporation and quadruple it. To this add twice the cost of all cleanup and restitution. This total is the fine levied. Should this fine not be paid, all senior management goes to jail and the assets of the company are seized for government auction. Things should settle down a bit after the first multi-billion dollar fines are handed out.
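
The scale is simple enough to write down directly; the dollar figures below are made-up inputs purely for illustration:

    # Fine = four times the maximum savings from breaking the law,
    # plus twice the full cost of cleanup and restitution.
    def corporate_fine(max_savings, cleanup, restitution):
        return 4 * max_savings + 2 * (cleanup + restitution)

    # Hypothetical example: $10M saved by cutting corners, $3M cleanup, $2M restitution.
    print(corporate_fine(10_000_000, 3_000_000, 2_000_000))   # -> 50,000,000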

Copyright. Copyright from the government's point of view used to be easy. Only those with lots of money and corporate backing cared or produced valuable content. With the rise of computers this is no longer true. I would push an updated bill biased toward the consumer. Overly large media conglomerates already have more than sufficient power in the form of Loonies.

I have in mind many other things which I may do, but I am not yet entirely convinced about all of them. The final thing I would do as Prime Minister, before being run out of office by those who hate the Canadian people, is do my best to diversify Canadian trade away from the USA. I just don't feel that the USA is reliable enough.

Lessons at Christmas

The best thing about the holiday season is the abundance of baked goods and other treats. The second best thing is the long periods of time off and the distractions from regular life. Mostly these are family from out of town or feasts. This year the calendar was arranged well. Even though I have no vacation time I ended up with a four day weekend. During these four days I discovered, or perhaps rediscovered, some unsurprising facts.

The first and perhaps most useful of these is that computers are the cause of all the ills in my life. During the break I barely touched a computer at all and I was happy. It is unfortunate that while computers seem to be the source of all the frustration in my life they are also the source of all its necessities. I am not yet sure what I will do about this rediscovery. Perhaps I'll examine and fix the aggravating factors. Perhaps I'll not touch computers as a hobby. Perhaps I'll try to make my living some other way such that I can still enjoy computers.

The second of the things that I discovered is that happiness is spending a lazy Sunday with a fresh book on a comfortable couch. I have been quite busy the past couple of years and haven't had the chance to read for pleasure much. In the past few weeks I have found time to read four books among my other chores. I believe that I will attempt to keep this up, though perhaps at a more sustainable level. Reading until two in the morning before work is not necessarily a positive thing.

Rediscovering reading has also clarified the direction my media consumption has been heading in since I left for my trip. I have, for nearly a year now, been avoiding visual media of all kinds. I have been avoiding TV, movies, pictures and everything on the Internet which isn't predominately text. I have instead been reading, listening to podcasts and talk radio. I never listened to much music and am listening to even less now. I find these more restrained and thoughtful media reduce my stress level.

This leads me to a blog entry I had considered writing, but never got around to, so I will summarize it here. After having listened to two CBC podcasts, one an Ideas programme recording of a speech about newspapers and the other a Rewind series of podcasts concerning the history of Public Relations, it occurred to me what the future of the news industry may be. The primary issue with news today is that it mostly prints press releases, for various reasons. This, coupled with the ever increasing number of minutes of news reporting, has led to news becoming a constant stream of informationless data. The only future I see for news is for the industry to drastically increase its signal to noise ratio, starting with a stiff cut in the amount of data output. Newspapers will not become extinct, but instead return to weekly printings of news, not just reworded press releases, stock prices and sport scores.

The final discovery comes about because I watched Avatar in 3D. I am impressed by how far technology has come in crossing the uncanny valley since Final Fantasy. There are only a few spots where the unrealism is jarring. 3D movies, however, I feel don't add enough and are only a gimmick. One thing these film makers need to learn is that with a 3D movie you can't direct the attention of the audience through the use of focus. The entire volume must be in focus at all times. Doing otherwise gives viewers who want to look at the scenery eye strain. I know it'll be a while before I watch another 3D movie.

Information Organization

Information has come up a couple of times among my friends in the past short while (see here, here and here). The solutions and problems discussed seem to revolve around ignoring the things you aren't interested in to make more time for those which you are. Apike makes the suggestion of ignoring aggregators in preference for primary sources. Curtis just wants to read everything without setting his brain on fire.

My view on this whole debate is different. I avoid information overload in three ways: filtering aggregation; categorization and prioritization; and finally quick filtering. Through the application of these three techniques I learn about everything important without spending my life reading.

First we have filtering aggregation. This is getting most of my transient information, that is news and gossip, from other people. People who filter the Internet for me. In this class I read Slashdot, certain sub-Reddits and a few other sources. I fully realize that I don't see every little piece of news on the latest gadget, but I am OK with that. When picking aggregators it is important to keep in mind your general interests and acceptable volume. I find that older sites tend to do better at filtering the useful from the inane and transient. This is likely because, being older, such a site has attracted an older crowd which has learnt the lesson that you can't know everything. Volume is critical: no site which ever has more than twenty-five items a day should be considered, especially when viewing the constant stream.

It is important to distinguish two separate ways in which you can use an aggregator. The first, which is the most common now in the days of RSS, is to take every article posted and treat that as the list of articles to read. The second, which has fallen largely out of favour, is to take snapshots of the article lists once a day. This latter method works especially well with aggregators which keep popular links up longer. I find that Reddit is best read this way and I read only the top page of each sub-Reddit I follow, once a day.
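
To make the snapshot idea concrete, here is a minimal sketch of how the daily snapshot and the volume rule could be automated. It assumes Python with the third-party feedparser library, the feed URLs are only illustrative placeholders, and the twenty-five item cut-off is the volume rule from above applied to whatever a single fetch of a feed returns.

    import feedparser  # third-party RSS/Atom parser; not part of the standard library

    # Illustrative feeds only; substitute the aggregators you actually follow.
    FEEDS = [
        "https://rss.slashdot.org/Slashdot/slashdotMain",
        "https://www.reddit.com/r/programming/top/.rss?t=day",
    ]

    MAX_ITEMS_PER_FETCH = 25  # volume rule: skip anything chattier than this

    def daily_snapshot(feeds=FEEDS):
        """Run once a day: take one snapshot of each feed, skipping noisy ones."""
        snapshot = []
        for url in feeds:
            entries = feedparser.parse(url).entries
            # The size of a single fetch is used as a rough proxy for daily volume.
            if len(entries) > MAX_ITEMS_PER_FETCH:
                continue
            snapshot.extend((entry.title, entry.link) for entry in entries)
        return snapshot

    if __name__ == "__main__":
        for title, link in daily_snapshot():
            print(title, "-", link)

Run once a day from cron, something like this leaves you with a single bounded list of titles and links rather than a constant stream.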

The second technique is categorization and prioritization. Most of the articles and content on the Internet loses value incredibly quickly. An article which is inspiring and groundbreaking one day is valueless by the next week. It thus becomes easier to pick out only the most valuable information the older the articles are. This lets us make use of the powerful tool of prioritization. By prioritizing the articles with respect to relevance we ensure that we only read what we have time for. Any articles which don't fit within my fifteen minute morning reading session are left until I have more time. Later, when I get around to reading them, time has passed and articles of lesser importance have accumulated. This provides three ways of making our lives easier. The first is that the articles are older and so of lesser value. This means that more of them will fall below the minimum value threshold and not be read. The second important fact is that there are more of them. When a topic has a sudden burst of articles it likely means that something of interest has occurred in relation to that topic. That topic becomes worth reading. The other topics which have few articles concerning them are less likely to be interesting. The third advantage is that if you wait then most of the interesting comments will already have been made on the interesting articles. Do not underestimate the power of interested and knowledgeable commenters to provide value to an article. In fact, I often read Slashdot articles solely for the (filtered) comments.

Categorization and prioritization is all about delaying. With the addition of an extra twelve hours it quickly becomes obvious which topics had something interesting happen and which didn't.
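
For the curious, the delaying and prioritizing can be expressed in a few lines of code. This is a minimal sketch, assuming each article was tagged with a topic when it was fetched; the twelve hour delay, one week staleness cut-off and fifteen article budget are only example numbers standing in for my fifteen minute morning session.

    from collections import Counter
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Article:
        topic: str
        title: str
        fetched: datetime

    DELAY = timedelta(hours=12)   # don't even look until this much time has passed
    STALE = timedelta(days=7)     # a week-old transient article is worth nothing
    BUDGET = 15                   # roughly what fits in one short reading session

    def reading_list(articles, now=None):
        """Delay, drop the stale, then read the burstiest topics first."""
        now = now or datetime.now()
        ripe = [a for a in articles if DELAY <= (now - a.fetched) <= STALE]
        # A burst of articles on one topic suggests something actually happened.
        burst = Counter(a.topic for a in ripe)
        ripe.sort(key=lambda a: (-burst[a.topic], a.fetched))
        return ripe[:BUDGET]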

The final tool is quick filtering. This is the only technical aspect of my strategy. In dealing with large volumes of information, most of which is of little value, it is important that filtering be a quick, efficient process. The first requirement is that whatever software you are using be quick. Any delay in processing your commands is unacceptable. If you use a web based RSS reader I would recommend you try some other reader. Ideally the delay between you skipping one article and the next article appearing would be 20 milliseconds or less. If you can feel a delay it is too slow.

It is also important to minimize the information you, the slow human, use to filter upon. Ideally you will filter upon only the text in the subject. There should be no date (does it really matter if it happened today or last week?), no author (generally more interesting authors should be in a higher priority group) and certainly no body text. Making reading decisions solely on the quality of the subject may seem harsh, but the basic fact is that an author who cannot write a concise, interesting subject is unlikely to have written anything truly original or interesting. Filtering by subject line also eases the constant delays of refocussing your eyes or having your computer process your command. If you can ignore a full screen of articles with a single keyboard command then it matters less that it takes your software one hundred milliseconds to switch to the next screen.
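
As a rough illustration of that last point, here is a terminal sketch of subject-only filtering, assuming the subject lines have already been collected by the earlier steps. It is nowhere near the 20 millisecond ideal of a good native reader, but it shows the shape of the interaction: subjects only, and a whole screenful dismissed with a single press of Enter.

    SCREENFUL = 20  # subjects shown per keyboard command

    def quick_filter(subjects):
        """Show subjects one screenful at a time and keep only what is asked for.

        Press Enter to discard the whole screen, or type the numbers of the
        subjects worth reading (for example "2 7 15") before pressing Enter.
        """
        kept = []
        for start in range(0, len(subjects), SCREENFUL):
            page = subjects[start:start + SCREENFUL]
            for i, subject in enumerate(page, 1):
                # Subject text only: no date, no author, no body.
                print(f"{i:2d}. {subject}")
            choice = input("keep> ").split()
            kept.extend(page[int(n) - 1] for n in choice
                        if n.isdigit() and 1 <= int(n) <= len(page))
        return kept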

That is the strategy: don't look at what you can avoid seeing, don't read now what can be left until later, and quickly ignore anything which got past the previous filters. With that you'll be reading significantly less than you ever imagined possible and not missing out on anything truly groundbreaking. Remember, people on the Internet are like a flock of birds: anything truly interesting will set them off into a repetitive flurry that lasts two or three days. It doesn't matter if it is the original article or the hundredth response to it which catches your attention, you can always follow the links back to the source.

There is one more factor to take into consideration. For the health of the Internet not everybody can read only heavily filtered lists of articles. If everybody wanted to read only the best one percent there would be nobody to read through the remaining chaff to find the gems. You should pick your favourite topic and read that topic unfiltered, keeping in mind to pass along articles which others would find interesting. You need only read to the level which your interest and time allow. If you can read all the source articles, go right ahead, but remember that people are needed at every filtering level to ensure that only the most relevant and important articles make it to the top.

In fact, I'd probably pay somebody to provide me a list of the twenty most important articles from yesterday.

Lessons

What did we learn from my last entry? Firstly that my initial computed layout wasn't very good.

The other thing I learnt is that it is impossible to keep a coherent thought when the majority of one's mental power goes into the struggle just to put those thoughts down.

I won't be blogging much until I pick and get good at my new keyboard layout.

Feature Creep as a Human Phenomenon

Warning! The following is an incomplete thought and will be full of errors.

Feature creep is a term which describes the continual addition of features to a piece of software. This occurs even though the software already meets the needs of the vast majority of its users. Feature creep is detrimental because each feature makes the program as a whole slower and more resource hungry. The end result of feature creep is the eventual collapse of the software or the creation of new software which serves the same purpose. In general software grows without bound until it is too cumbersome to update and too cumbersome to use, at which point it is dropped for some other software.

In our increasingly computerized world it is bad enough that all the software we create eventually becomes too complicated to operate. Unfortunately feature creep is not restricted to software, it merely advances fastest there. No, feature creep stems from Western society and can be seen in all its aspects from engineering, to law, to education, to the very norms which create the society itself. This creep is not known as feature creep, but instead goes by the more positive sounding name of progress.

Examples of feature creep are easily found in life. Take the phone, which now surfs the web, plays games, organizes your time, takes photos, plays movies, does email and a host of other things. The scope of a phone has broadened. Now some of these features are useful to some people sometimes. Yet even the features a person doesn't use cost something. Those who do not need a colour screen still pay for its battery cost. The law contains so many loopholes and special cases to satisfy small groups that an expert is needed to navigate it. It now takes a considerable effort to find a basic car, that is, one without AC, power seats, a high performance engine and other things. These are a few examples of areas where it is difficult to find things without 'features' which are of no use to many people.

It seems that nobody can accept the way things are as good enough. Perhaps not enough people realize that at some point you can only improve one aspect at the expense of another?

Learning a New Keyboard Layout

Though I haven't mentioned it, I have been using a Kinesis Contoured keyboard for a couple of weeks. This is all part of my attempt to avoid RSI.

I am not content to just get a fancy keyboard. I am also changing the layout. I am currently trying out one layout which I computed. So far it is frustrating beyond belief, but that is mostly because 10 WPM is significantly slower than I otherwise type. I'm not sure if I will stick with this layout, try a different one, or fall back on DVORAK. Time will tell.

New Feature: Comments

Some people out there have requested the ability to comment on my blog. Well here it is. I have officially added the ability to create comments! It is still early so there may be quirks. If you come across any please do notify me.

Happy commenting.

Epic North American Trip Summary

Well I've been home for two weeks now and have finally gotten the last of the trip stuff squared away. And so without further ado here is the sum of the remaining wisdom and observations I have gained from the trip.

The first thing I am going to cover is that Canada is really big. In fact, Canada is so large that our epic trip turned out to be too epic. We had originally planned to travel into the territories. Alas this turned out to be impossible. By the time we had finished travelling through the provinces it was late August. In the North the winter comes early with snowstorms starting in September. I figure that our epic trip which we originally thought would take approximately five months would actually take more like eight months.

Canada is more varied than the United States in some respects. While the United States is more varied in climate it is less varied in culture. In Canada the climate is approximately the same no matter where I went. Sure there are some differences between the forests, prairies and tundra, but not that much. At no time did I feel that the landscape was alien.

The culture has an east-west gradient though. Well, not precisely east-west. With the exception of Ontario the culture becomes more stereotypically Canadian as you move east. Small, close knit communities become more common and you get a sense that good society is the goal more so than individual well-being.

Then there is Ontario. In many respects Ontario is the most American of all the provinces. This is more about feel than anything which is easily put into words. It is a start to say that the road system resembles the States more than that of any other province and that commuting is a way of life, but there are a number of other factors which I just cannot articulate.

The most beautiful place in Canada is PEI, hands down. It really embodies the best of Canada: the beauty of a green spring, bright sunny summers, abundant locally grown produce and the snow that I love so much.

Unlike the US the wealth of the average person seems more or less consistent across the country. Those in the west tend to have larger houses, but those in the east tend to have larger lots. There are far fewer run down houses and dying towns.

Now there are a few interesting facts about North America that I've learnt on this trip. The first is that no matter where you go you will find somebody has driven there from British Columbia, Ontario, Quebec, California and Florida. It doesn't matter how far from home they are, they will be there and have driven there.

Contrary to popular belief the Quebecois are not all jerks. I'm sure that some of the stories of the abusively rude are true, but they cannot be as common as we are led to believe. However, I can confirm that there are people there who do not speak enough English to depend upon. Well, it is either that or the majority have realized the best way to punish Anglophones is not to give them the finger, but instead to refuse to speak any English. Yes, even if this means suffering through the Anglophone's half remembered high school French from ten years ago. Being rude about it just sets people against you.

To some the automobile is the ultimate symbol of freedom. If you subscribe to this view I can assure you that it simply is not true. You are only as free as the amount of gas in your tank. Fuel is the leash to the freedom of your car. After this trip a 4X4 will never have quite the same sense of freedom.

The final great piece of wisdom is related to travelling. We were travelling for nearly six months. We had only a short break in the middle at home. This is too long. The trip started to become a grind after the end of the second month. I strongly recommend that no one travels for more than two months at a stretch. This goes doubly so if you are moving a lot.

In the end this trip cost a fair amount. From the receipts we kept I've totalled up the travelling costs. This is not the complete cost of the trip as the cost of equipment and automotive repairs before we left is not counted. Also there are likely data entry mistakes, lost receipts and illegible receipts. These values should be more or less accurate though. First the USA.

Fuel (including ice): $2340
Attractions (including parking): $913
Food (both restaurants and from the grocery store): $1105
Accommodations: $2130
Miscellaneous (including vehicle repairs and cash withdrawals): $1360

The total for the USA is $7850 in US funds. There is some overlap in that we spent the cash we took out from time to time on items for which we received a receipt. Next up is Canada.

Fuel: $3559
Attractions: $834
Food: $1135
Accommodations: $1984
Miscellaneous: $1266

The sum total is $8779 in Canadian funds. Taking the exchange rate into account the total travelling cost is somewhere in the area of $20,000.

The trip was well worth it. Though I will never do a trip such as this again I will remember it for the rest of my life. And after all this Courteney is still agreeing to marry me. Who would have thought.

In closing I would like to thank all those who helped us along the way and all the family we saw as we moved across. The trip would have been much more difficult without you. For those who have followed us on our journey I hope you have enjoyed it. Pictures are up in the gallery for the Canadian portion of the trip for those who wish to look.

Now it is back to regularly scheduled life.

KM 47773: Langley, British Columbia

Finally we are home. Stay tuned for a final summary post and a link to the pictures from the Canada portion of our trip.

KM 47219: Blue River, British Columbia

Today I didn't spend all day driving! The first thing we did after getting out of the campsite was to visit the legislative building of Alberta. It is much as you would expect and is similar to the one for BC except that it is a pinky-beige colour.

Then we visited the West Edmonton Mall. Courteney had never been there and wanted to go there to claim that she had been there. We played mini-golf there and I beat Courteney by ten strokes.

Then we drove westward. We should be home tomorrow in the afternoon and that is making both of us happy.

KM 46611: Edmonton, Alberta

Driving a brick against the wind all day is not only annoying, but also not fuel efficient. Getting closer to home though. Edmonton falls prey to the prairie city problem. That is massive sprawl. Worse is the fact that it is sprawling in a circular pattern. This means that it is crisscrossed with highways, all much like the last. It's annoying.

In more touristy news we saw two things of interest today. The first was a Naval Reserve Base in Saskatoon. That's right, a naval base in the prairies. We also saw the largest pysanka (Ukrainian Easter Egg) in the world and I tell you, dear friends, that it is larger than the largest ball of twine in Minnesota.

KM 45858: Wynyard, Saskatchewan

Well, not only did we make it out of Ontario, but we also drove across all of Manitoba. There was also another item which we checked off the list. We ate at a Red Lobster. It isn't a terribly exciting item, but now I can stop wondering about those commercials. We ate at one of the ones in Winnipeg: more seafood at the midpoint of Canada, nearly as far from an ocean as you can get and still be in Canada.

KM 45039: Kenora, Ontario

Nothing went wrong today. This is awesome. We continue our way west and are almost out of Ontario.

KM 44302: Geraldton, Ontario

Well, my day sucked. First I had to wait to get my tire patched. Then my truck decides that it wants to stall a lot. This happens most when I back up, which is the one time I cannot go quickly because I cannot see out my back window very well. Then I get a leaking brake line.

In general it is impossible to get a mechanic to do any work when you want it to be done today. They are always busy. Perhaps I should have been a mechanic.

Anyways, I get lucky and manage to find somebody to fix it and it gets done early enough for me to still make some distance. My truck is starting to show its age and it worries me that I still have five thousand kilometres left to go.

KM 43626: Larder Lake, Ontario

Well, I drove a whole bunch and things were going well until I got a flat tire. It was late in the day so I put my spare on and continued on to the campsite. I'll deal with it in the morning.

KM 42803: Roberval, Quebec

The wheels on the truck go
Round 'n' round
Round 'n' round
Round 'n' round
The wheels on the truck go
Round 'n' round
All through the day

The gravel on the road goes
Ting, ting, tonk
Ting, ting, tonk
Ting, ting, tonk
The gravel on the road goes
Ting, ting, tonk
All through the day

The Courteney in the truck goes
Zzzzz, zzzzz, zzzzz
Zzzzz, zzzzz, zzzzz
Zzzzz, zzzzz, zzzzz
The Courteney in the truck goes
Zzzzz, zzzzz, zzzzz
All through the day

The gas into the truck goes
Glug, glug, glug
Glug, glug, glug
Glug, glug, glug
The gas into the truck goes
Glug, glug, glug
All through the day

The radio in the truck goes
Warble, warble, warble
Warble, warble, warble
Warble, warble, warble
The radio in the truck goes
Warble, warble, warble
All through the day

The wheels on the truck go
Round 'n' round
Round 'n' round
Round 'n' round
The wheels on the truck go
Round 'n' round
3,973,823 times!

I also saw a chopper with long horns on the front. Awesome.

KM 41820: Between Gagnon and Fremont, Quebec

Well, the ferry arrived late. That's unavoidable. Nonetheless we made good time and drove the nearly six hundred kilometres of gravel highway to Labrador City. The highway was often good (capable of 90 KM/h), sometimes poor (70+ KM/h) and only rarely bad (less than 70 KM/h).

We arrived in Labrador City looking for a room for the night as I had promised Courteney that we would stay in a motel or hotel every night in Labrador. There was not a room to be had. Every room in the town and the towns within an hour's drive was booked solid. Apparently contractors, likely for the mines, have taken every room.

So since there are no rooms available and the bugs are quite numerous we are in an abandoned gravel pit spending the night in the truck. After this night is over the only type of lodging we would not have tried would be a hostel.

Tomorrow promises to be a high kilometre day as we try to hurry our way out of Quebec.

KM 41150: Cartwright, Newfoundland (Day 4)

I didn't make an entry yesterday because nothing happened. We didn't even leave the room. Having no money in a small town sucks. One can only talk to the guy at the gas station for so long.

Today we got up at a reasonable time and packed up the truck. This evening we are finally on the ferry to Happy Valley-Goose Bay. Of course this is yet another bad experience on a ferry on this end of the country. First we need to check in at five in the evening. Then they eventually start loading us. Now we didn't get a berth because there are none left. This means that yet again we are looking to sleep on whatever chairs can be found.

Well, even though we started loading shortly after five thirty the boat didn't leave the dock until after nine. I have no idea whatever could have held up the boat so long, but this means that we will not make it into Goose Bay on time and this will set back our travelling.

KM 41139: Cartwright, Newfoundland (Day 2)

Well, this is the second of four days in Cartwright. We are waiting for a ferry and we have reservations on the Monday night ferry. We slept in late, did laundry and otherwise killed time.

KM 41139: Cartwright, Newfoundland

Throughout this trip we have travelled and acted according to our whims with minimal planning. We don't make reservations and really don't plan more than a couple of days ahead. Today this easy living has caught up with us.

On this trip there is one more ferry to take before we are solidly placed on highways all the way home. Well the ferry we need to take only runs twice a week, Saturday and Monday. Today is Thursday. We arrived in town in the morning and discovered this. We tried to get a reservation for Saturday, but the ferry is full. We now have a reservation for Monday and need to kill time in this small town until then.

So we have time to kill. We spend about three hours today chatting with the attendant at the gas station. This was rather unexpected and nice, but is only a start. Pete is a nice friendly guy though. This will be the hardest few days we've had. Not only is this town of 700 short on cheap activities to do, it is also hundreds of kilometres from anywhere unless you own a boat. It is going to be a long weekend.

KM 40967: Port Hope Simpson, Newfoundland

This morning we awoke at the brilliant time of five AM and I got to watch the sun rise. After getting cleaned up, fed and packed up we drove to the ferry to Labrador. A few hours later we were in Quebec, which is where the ferry lands, and mere minutes away from Labrador.

We decided to stop for lunch and were swarmed by small biting flies; it sucked. It does however remind me of how few bugs there were in Newfoundland. We came across almost none and at no time did we need to retreat or use bug repellent. It was quite nice.

So we've made it to Labrador, where the trees are short, the flies are numerous and the highways are gravel. At least they are well maintained gravel where you can often do 90 KM/h without trouble. Other than that there isn't much to say yet except that the towns tend to be small.

KM 40568: Raleigh, Newfoundland

Today we drove. And saw the only confirmed Norse settlement in North America.

That is, we have finally reached L'Anse aux Meadows, the Viking settlement from 1000 AD. The archaeological site itself isn't terribly interesting because, like most digs, it has been carefully covered over for its protection. What remains are just shaped mounds in the ground which trace the wall plan.

Though that may not be very interesting there is plenty around it which is. The first is the visitor centre which is really a small museum which explains the find, shows a few artifacts from the site and has a few more representative of items which would have been here at the time, but which were too valuable to leave behind. There you can learn interesting tidbits such as that the Vikings didn't have a compass and navigated mostly by gut and the sun.

On this site a short distance from the dig itself there is a reconstruction of a few of the buildings using the local materials which would have been available. This is staffed by recreationists who are quite knowledgeable. It is also stocked with accurate replicas of all the period items and tools. It is quite nice to see and worth the trip.

The trip itself takes you through a national park, several long stretches of wilderness and many small fishing villages. It is all nice to see and if I had significantly more money and time I think it would be educational to spend a couple of weeks in such a town.

With this done, tomorrow we need to get up early so we can attempt to travel to Labrador.

KM 40089: Deer Lake, Newfoundland

Today I was disappointed and we will not be going to France on this trip. We awoke early to travel to the ferry which goes to St. Pierre, an island of France. When we arrived we discovered that the schedule requires an overnight stay. Unfortunately as this ferry is a pedestrian ferry it means that we would need to stay in a hotel and eat three meals in restaurants. This is not in the budget.

In fact, as of yesterday we are on our reserve funds, with our primary travelling supply exhausted. Now the reserve funds are more than sufficient to get us home. It merely means that we cannot partake in any expensive activities. The only stop which this will likely affect is Churchill. Churchill is only accessible by train, which is expensive, and this requires a hotel, which is also expensive. Now going up to Churchill without going on a tour is more or less pointless, but this is also expensive. We will look at the costs, but at this point it is seeming unlikely to occur.

Well, after leaving the ferry terminal we took the long way around the southern peninsula we were on to reach the Trans-Canada Highway. Thus began our day of driving. It is easy to see that we covered a significant distance today and will cover much the same amount again tomorrow. Truly most of our days from here until we reach home will consist of us driving most of the time. This is alright and I was expecting it, but it will be more or less boring. At least we are heading home.

KM 39304: Frenchcove, Newfoundland

As I was mentioning in my last entry we were staying at a bed and breakfast. Well this morning the breakfast part came about. It wasn't nearly as bad as I thought it would be; there was one other, older couple there having breakfast with us. Breakfast was good, consisting of tea/coffee, French toast, biscuits, banana bread, fruit salad and the various condiments. We had a nice conversation and ate breakfast. We were about to leave when we started to chat with the hosts, but we had to cut that short to make our boat tour.

The boat tour was touted to show us Puffins, Courteney's second favourite bird, and whales. Unfortunately there were no whales, but there were over a hundred thousand Puffins and tens of thousands of other birds. It is quite a sight to see a small rocky island full of birds. The birds were nearly shoulder to shoulder along the entire length and height. Courteney enjoyed it and that is all that truly mattered.

After that we headed to a peninsula on the southern end of the island. We are going here in order to see about visiting St. Pierre, a small French island off the coast of Newfoundland. It is quite a long drive with a stretch in the middle of over two hundred kilometres without a gas station. I am thankful that I bought those gas cans to save us from having to use our propane.

We made it as far as Grand Bank, which is fifteen kilometres from the town where the ferry leaves. There we went to see the Provincial Seaman's Museum, which just had a single special exhibit on. This exhibit was a set of blown up lantern slides (precursor to the slide projector) detailing the trip of the first explorer to reach the North Pole. The story they tell isn't terribly interesting, but the slides themselves are coloured. During this time only black and white photographs could be taken. These are basically black and white photographs which have been transferred to glass and then had colours painted on in certain parts. The effect is quite nice and I found it almost better than a colour photograph would have been.

We then had to leave Grand Bank to travel back to a provincial park to camp. The area we are in is rather empty of people. It is rather nice. It is also much as I expect Labrador to look. The trees are stunted and closely packed where they exist. Where they don't, the ground is covered in low lying brush and grasses. There are small ponds and streams all over the place and in general it is quite rocky. This is perhaps the first time since we left the desert that I have seen a truly new type of terrain.

KM 38921: Witless Bay, Newfoundland

St. John's is like many older cities in that the roads are rarely straight and there are many odd intersections. We spent the majority of the day circling around St. John's trying to find things and on Signal Hill.

The first thing we tried to find was the legislative building for the province. We could find no strict listing of such a place and after trying a couple of places which seemed like they would be likely we gave up and left it to be found later.

Instead we went up to Signal Hill, just outside of St. John's and overlooking the harbour and city, to see what was there. The first stop on this hill was the Parks Canada visitor centre. In here we saw a bunch of stuff about the history of the hill. The primary use of this hill was to look over the harbour and bay and alert the city to incoming ships. It was also a military post, as a watch for enemy ships was kept at all times.

On our way out of this centre we just caught a traditional military tattoo, that is a musical military exercise. This was cool to see. They had people dressed up and marching like the Newfoundland Regiment of Foot from the late eighteenth century. They also played out a mock battle with the muskets firing just powder. They even had two old mortars and one eight pound cannon. The cannon, when it was set off, produced the biggest smoke ring I had ever seen. It rolled away from the cannon for at least thirty seconds.

Then we went to the top of Signal Hill to the Cabot Tower. The Cabot Tower was finished in 1900 to replace a previous wooden tower that had burnt down and to close the temporary tower that followed it. It was a nice place and had been used for many jobs in its time. Mostly a signal tower keeping a watch on the water, it was also used as a fire watch, radiotelegraph office, soldiers' ready room and most recently a gift shop and small museum. Now when we went in it was a fairly nice day, clear though overcast. Less than half an hour later when we left the tower the fog was thick. About an hour or two later it was clear again. That is weather on the ocean for you.

Once we left there we glided down the hill a short ways to the Johnson Geo Centre. This is your average geological activity museum except that it is built underground. Three of the four outer walls of the exhibit area are made of the bedrock. We went here specifically for an oil gallery they have which explains the processes of finding, drilling and processing oil. We saw that and a few other things which mostly focused on Newfoundland and Labrador, as these sorts of things tend to do.

Finally we headed to find the legislative building again. We had found that it was called the Confederation Building when we were at the Parks Canada Visitor Centre on Signal Hill so it was easy to find. It turns out that it is a pair of huge buildings done in a sixties skyscraper style. That is to say that they are large, square and plain. They are truly the least notable of the legislative buildings of the provinces I've seen (all of them excepting Alberta) in that they do not appear to be the seat of an old and prestigious institution. I consider the style quite unsuitable for such an old settlement as Newfoundland.

With that we said goodbye to St. John's and headed towards our next destination, Witless Bay, for a boat tour to see more Puffins. The tour itself leaves out of a town called Bay Bulls about five minutes away, but we were unable to find accommodations there. Instead we are in Witless Bay proper and will need to drive back in the morning. Now camping around here is scarce and we couldn't find a place that fit our needs, and because of our rough night last night minimal camping will not do. In the end this means that we have ended up in a bed and breakfast. As far as I know neither of us has ever been in a B&B before. I am left unsure of the etiquette; it's a lot like staying with a relative you haven't seen for many years. You are never quite sure of the rules and norms.

It is certainly an experience.

KM 38841: St. John's, Newfoundland

Another day of driving and we not only passed through the town of Dildo, but also made it to St. John's. The town of Dildo we visited merely because of its name. St. John's we are in because it's St. John's.

We arrived in town in the late afternoon because we slept in to recoup from our poor sleep the night before on the ferry. This left us little time to do anything. So we found a place to stay. At first I thought that we would be required to find a motel or the like for the night, but we found a campsite in the end which was close. Courteney is having a rough time with the camping and I unfortunately raised and then crushed her hopes of sleeping between a roof and a mattress.

We ended up going out to dinner because it was raining slightly and Courteney was in a bad mood. We ate at some quiet pub on Water Street. After dinner we went back to the campsite instead of exploring the supposedly lively bar scene because Courteney was in a bad mood and this had put me in a bad mood. It had rained quite heavily while we were gone and Courteney was fearing the tent would be soaked, but when we arrived back the inside was quite dry.

KM 38465: Gander, Newfoundland

It wasn't the least comfortable sleep I've ever had, but the night on the ferry ranks high on the list. But no matter, we made it. And then we proceeded to drive our butts off. The ferry landed at nine in the morning, local time. As we didn't have anything to put away or anything of that sort and we had had breakfast on the ferry we just drove off the ferry and basically didn't stop. That is how we covered two thirds of the distance to St. John's from the ferry terminal in just one day of lazy driving.

The first thing that you notice when you see Newfoundland is that the coastline is quite rocky, but just out of the water it is green. The entire island is either rock or green. Driving through the country reminds me strongly of BC. It is hilly, green, covered in trees and you see exposed rock face from time to time. Now the hills are not as high and the trees not as tall, but it is still a strong resemblance.

Newfoundland is also the only place on the trip so far where I have noticed a strong, identifiable accent. Everywhere else the accents were not so strong as to make you step back and think carefully about what you just heard to make sense of it. In the end it isn't terrible as there are just two critical things I've noticed. The first is that you need to pay attention at all times. Unlike what I consider normal, Newfies seem to head straight into the content without the usual introductions. This means that if you aren't paying explicit attention you will miss something important.

Now I've heard that the best way to make yourself appear to be smarter is to simply talk faster. Now if this is the case then Newfies must seem like the smartest people on the planet. They just motor. Maybe my brain is addled by too much West Coast leisure living, but they go. I don't think I could match the speed without some serious practise. But like all accents you get over it after being exposed a few times.

Shortly we will both reach St. John's, that magical city which has the honour of being the most easterly city in Canada which we will visit. This means that after we leave St. John's we will be heading west and home. Soon we will be coming closer instead of going farther.

KM 37817: Inter-Provincial Waters, Canada

As I haven't explained why the ferries to Newfoundland were so backed up I will explain briefly. Firstly one of the ferries had some sort of explosion or fire in its boiler room. This put it out of commission and threw off the schedule. The other ferries pushed through to make do, but were unable to catch up for some reason, even with every ferry full to the rafters. Instead when we checked up on our sailing they were seven hours behind. What this meant is that our 6:30 PM sailing didn't load until midnight and didn't leave the docks until 1:30 AM.

Now the ferry we were on felt more like a small cruise ship than a ferry. Firstly most of the space in the ferry was taken up with private cabins, all of which were booked. Secondly, this ferry had a restaurant, not a cafeteria. It also had a health club and a casino. Now this left almost no room for normal seats. This being an overnight sailing (when they leave after dark they slow down and take seven hours to arrive) everybody was cramming into the undersized lounge in order to find a spot to sleep. Courteney and I found a spot, but it certainly wasn't comfortable because the seating is a continuous couch that encircles the deck like a serpent; there was no truly straight section longer than three feet.

So we had this sailing which we had expected to be at 6:30 in the evening. This meant that we had many hours to spend around town before we could line up at the ferry terminal. First we slept in two hours. Then we packed camp up leisurely. After that we went to Canadian Tire to complete my collection of five five-gallon jerry cans. Now when my truck is full of fuel I have a range of something like fifteen hundred kilometres. Now half of that is in propane which is impossible to find in some places and so I would like to use that only as a last resort as I may not be able to replenish it. After doing that and filling my truck to the utmost with fuel we went to a public library for five hours. Finally when we could read no more we wandered a mall for an hour before finally just sitting in the mall parking lot with the windows down and the radio on.

On the road killing time can be the most difficult thing, especially since there are no truly comfortable spots to sit and wait. The truck is too hot, some places require visiting a parking meter every few hours, malls are boring, there are no good movies out and the world has a depressing lack of publicly accessible couches. We managed, though just barely.

KM 37744: New Harris, Nova Scotia (Day 2)

Our second day of killing time went rather well considering the amount of time we had to consider what we would do. In the morning we had a fine breakfast (but only because we could leave the dishes to be done later) and then headed off on a boat tour. We went out to a pair of islands known as the Bird Islands. They are just two small rocky islands which are raised out of the ocean. There used to be a lighthouse, keeper and keeper's cattle on these two islands, but now there is only an automated lighthouse and birds. Thousands of birds.

We went specifically because Courteney wanted to see Puffins, which we did in spades. We also saw a great number of bald eagles, several breeds of seagulls, razorbills, a blue heron, a whack of grey seals, a few dolphins and some other birds which I cannot recall the name of. It was quite an enjoyable three hours. And quite affordable as well, something like thirty dollars a ticket. The captain was knowledgeable and our three hour tour didn't turn into many years of comedic antics.

After returning we went back to our campsite for lunch before heading off to the east side of Sydney to see the Marconi National Historic Site. This is the site of the first attempt at a trans-Atlantic radio service. It is quick, but descriptive and has a model of the site as it was during the attempts. The radio signals used longwave and required not only an enormous amount of power (there was a 75kW generator on site for this purpose), but also enormous antennas. The model showed an inverted square pyramid nearly two hundred feet on a side and over two hundred feet tall made of hanging copper wires. It would have been quite the sight to see in its day.

They also had an amateur radio operator there, but we didn't see what he was up to.

We then returned to our campsite to make dinner. Along the way I picked up two jerry cans and will be picking up at least two more in order to ensure that we have enough fuel to make it across the northern sections of the provinces. Life can be difficult with a gas tank that is half the size it should be.

KM 37583: New Harris, Nova Scotia

This morning we awoke in the basement of Sam Grant Senior's parents' place. They had kindly offered us a floor for the evening and showers in the morning. Sam Junior and Erika were heading home this morning and needed to be out early. We awoke in plenty of time to see them off and after a few minutes of packing and showering we were off as well. The plan was to arrive at the ferry terminal and wait for a ferry to Newfoundland. That was the plan.

What actually happened is that we arrived, waited in line, reached the front of the line and were then told that the ferry was entirely booked up for two and a half days. This was quite a surprise and needless to say we are not in Newfoundland this evening. Instead we bought a ticket for the later sailing and then had to figure out how to spend another two and a half days.

The first thing we did was continue to finish the Cabot Trail. I had been told around the fire that we had missed the best portion by not completing the loop. On the way to doing this we stopped in at the Alexander Graham Bell National Historic Site. This was interesting to see as he did much more than just the telephone. Perhaps the most ahead of its time was his work on hydrofoils and hydrofoil boats.

We then continued and finished the trail. I believe I was misled because the south eastern quarter of the trail is not anything special at all. The best portion of the trail is really the section which is bookended by the gates of the National Park there. We unfortunately were in too much of a rush to stop in any of the side roads, but there are plenty that promise to be excellent.

This took us into the afternoon so we planned to go on a bird watching tour tomorrow and found ourselves a campsite for the next couple nights while we wait for our number to come up at the ferry.

KM 37258: Point Edward, Nova Scotia

On Cape Breton in Nova Scotia there is a relatively famous route called the Cabot Trail. This trail goes up and around the north west side of the island. We spent the majority of the day driving to and along this route. It is quite the spot with many spectacular views. We drove about two thirds of the trail. We went up the west coast from the rest of Nova Scotia and got off to go to Sydney.

The reason we went to Sydney is that a number of the Grants are out on vacation in the area. These are friends of the family so we wanted to stop in and see them before we head to Newfoundland. The real reason for the rush is that two of them are heading back tomorrow morning. We did make it and spent a couple of hours sitting around a fire in their aunt's/sister's backyard with some of their other family. It was good and welcome to see some familiar faces from home.

On our way north on the Cabot Trail we also hit the start of a parade. We arrived about two minutes after the parade started and were three cars from where all the floats were turning onto the route. Thus we got to see most of the parade from a short distance, even though we weren't on the route. After the final float had passed by traffic continued with us in it travelling at the pace of the parade. It was fun at the beginning, but the parade didn't travel smoothly and instead stopped and started. It took us about an hour to reach the end of the route and by the end of it I was tired of stop and go traffic in the hot sun.

KM 36698: Hildon, Nova Scotia

When you put your mind to it one can get a lot of sightseeing done in a single day. We started today with the Maritime Museum of the Atlantic. This is a museum which covers maritime history from sailing ships, steamships and more modern diesel ones down to shipwrecks. It isn't too big and isn't repetitive, so it was enjoyable. They have a nice collection of small sea boats, most of them small sailing vessels of the sort used by the common man in the past. There is also an entire steamship sitting at the wharf to be explored. We spent nearly four hours there and enjoyed it quite a bit.

After that we walked around a bit in downtown Halifax in order to find what amounts to the legislative building in this province. They call it Province House here and like the other legislative buildings for the maritime provinces it is rather small. I suppose this is to be expected.

Finally we went to a vegan friendly cafe elsewhere in Halifax for a late lunch. This was a pie stop. We both had a bowl of rice/noodles with ample vegetables and differing sauces. This was besides the nachos. If you ever want a truly good plate of nachos you need look no further than the nearest vegetarian restaurant. I'm not sure why, but they make the best nachos. Anyways, after stuffing ourselves on rather good vegan food we proceeded to the vegan pie.

Now some might not know the amount of animal based food that goes into the average pie: eggs, milk, lard, cream and any number of other things. Of course vegans refuse to eat anything of the sort so I was unsure of what a vegan pie would taste like. On offer today was the cocobanana pie. It had banana and coconut in it. It was rather good. The crust was what I was most interested in and it was alright. It wasn't flaky at all, instead it was crunchy. It also wasn't as dry as I had expected.

We are making progress towards exiting Nova Scotia. All we have left on our list to do here is to see some of Cape Breton. I'm not sure why, but people keep on asking if we are going to go there or not. We'll find out soon enough.

KM 36568: Sackville, Nova Scotia

At this point Courteney and I are getting tired of travelling and sightseeing. The greatest evidence is that as we go along the time we spend at each site gets shorter and shorter. Also some things which made the list we decide are not worth our time. An example of the latter happened today. We originally had a blacksmith shop on our list. However, when we arrived at the town where it was supposed to be we had some small amount of trouble finding it. Instead of putting a lot of effort into looking we both easily agreed that it wasn't worth it and that we could move on, which we did. Long term travelling is quite hard and is taking its toll on both of us.

In fact, in some ways I am beginning to consider myself a professional tourist. I have perfected the art of being quick at service counters, reading attraction maps and generally getting around without spending too much money or missing anything. I'm even getting angry at the lesser amateur tourists. This happened when we visited Peggy's Cove today. Peggy's Cove is home to a picturesque lighthouse and a small fishing village of about two hundred people. The main street is narrow, winding and lined with boulders the size of large cars. It is also an immensely popular tourist attraction. When we arrived there were easily five hundred tourists wandering around the town, blocking traffic and generally being a nuisance. I feel sorry for those who live in Peggy's Cove and could never live in a similar situation.

We saw the lighthouse and got some postcards and lobster flavoured potato chips. The latter aren't that bad, as long as you don't mind a bit of a fishy taste.

Then we moved on to Halifax and the first stop was the Alexander Keith's brewery tour. This theatrical tour is much as one would expect: introduction by a costumed lady, movie outlining the history, pretences of meeting Mr. Keith, beer tasting and entertainment while drinking said beer. Somewhere in there we even learnt how beer is made. It was fun and the hour went quickly.

KM 36249: Shelburne, Nova Scotia

I have lost the patience to waste a day away. Travelling and constantly having something to do or something to see or somewhere to go has done this to me. This morning we went first thing to find a mechanic to do my brakes to avoid the disappointment of yesterday. We ended up finding a slot at the Canadian Tire in town, but not until after three in the afternoon. This was at quarter after nine in the morning. So we had most of a day to kill.

We had arrived in this town from our drive yesterday after not getting our brakes fixed because we needed to be here anyways to see the world's smallest drawbridge. Now most tourist attractions, no matter how small and inconsequential, tend to be clearly marked. The rest which are not clearly marked generally have at least one sign off the nearest major highway pointing the way. The drawbridge had no signs. All we knew is that it is in a small town called Sandford just outside of Yarmouth, which is the large town that Arcadia sits just outside of. Well, not knowing where it was, but Sandford being a small town, we thought we would just drive through Sandford and see it. Well we drove through it, and the next town over and one more town over. The highways around that spot are not terribly well connected. So we made another pass through the town and tried two side roads as we went. On the third side road, after travelling down it quite a bit, we ended up at a wharf. As part of this wharf and associated breakwater there is a pedestrian bridge about twenty feet long that parts in the middle. It was a drawbridge. No signs, no plaque, nothing but the world's smallest drawbridge.

So we slowed down as we drove by to see it. This all told took about half an hour. Having no further plans for the day and plenty of time to spend we headed back into Yarmouth to find the local tourist information centre as they tend to be good sources of information on where to kill a few hours that isn't a mall or movie theatre. This time not so much. Not unless we wanted to go through some of the three or four small museums. What we did find was an old lighthouse that was opened to the public with a small museum in it. This was a bit of a drive on narrow, winding roads through an old fishing village on a peninsula.

It was much as you would expect. There was a small cafe where we had lunch because it was raining fairly hard. We both had some fair tea, a bowl of fish chowder and some bread pudding. The pudding was delicious. Afterwards we went for a short walk around. Courteney's hair was blowing in the sea gale. It was also foggy all day so we could only see about a hundred feet into the ocean before it became greyed out.

After we finished there it was still only two. So we headed back into town to wait it out. We ended up going into a dollar store in the hopes of finding some cheap citronella candles and candle holders. We did find a few things. After that we went back to the truck where I listened to a couple of podcasts and Courteney worked more on her latest needlework project, a stuffed dragon.

Finally the appointed hour came and we could drop the truck off. We did so and then proceeded to wander the Canadian Tire. We did so for a while and commented to each other on various things. We paid special attention to the citronella lamps as we were going to try them. After wandering for an hour we returned to find the truck finished. So we got the keys, bought the lamp and lamp oil and moved to the truck to make some distance before it was time to make camp.

We did make about a hundred kilometres. When we stopped the first thing we did was to set up the lamp and candles in an attempt to keep Courteney from getting bitten. I've often heard people claim that they don't work at all and I'd like to disagree. They do work, especially the lamp with a large flame, but not over nearly the distance one might hope. I found they work best within three or four feet of the flame. That isn't quite the backyard protecting distance, but is better than nothing. An important thing I noticed is that as long as I stayed within the protection I would be fine, but if I left and came back mosquitoes would follow me in. So the most effective use of them seems to require that there be a lamp or candle every few feet and that all areas of travel be covered.

Hopefully they help enough to make Courteney stop being miserable.

KM 36058: Arcadia, Nova Scotia

Today was a day of disappointments. First we drove over two hours through the scorching heat in my air-conditionless truck to see the world's heaviest lion in Aylesford. Unfortunately the lion died sometime in February when we were still making up our list. We instead walked around the zoo and saw a few other animals. It was at least nice to get outside in the sun, but the sun was so strong that without our hats I fear we both would have fallen to heat stroke.

After the zoo we drove, again through the hot sun, in search of somebody to put new brake pads on my truck. They are due for a change, but I am unable to do them myself because I don't have a few necessary things and I don't really have the room to haul them home. Alas we found nobody who could do a while-you-wait job. So we need to travel one more day with squeaky brakes.

Then, as we travelled towards our next destination, not only did we pass through more bright sunshine, but on our way there we entered heavy fog. So not only did we spend most of the day sweating ourselves into puddles, we didn't even get to enjoy a nice warm evening because it got cold! Talk about disappointing. Especially since when the sun is blocked or has gone away out come the mosquitoes.

Hopefully tomorrow goes better.

KM 35544: Pictou, Nova Scotia

Well, we have finished visiting PEI. All that we really had left was to see the legislative buildings. We did that, but only after Courteney bought a purse and a silly hat. We also saw the latter two thirds of a free musical show about the history and important elements of PEI. Only then did we take a look at the legislative building. It is rather small, but then so is the province. It also served as the place where the Fathers of Confederation set about creating the Dominion to protect the British colonies from American influence.

Other than that we took a ferry over to Nova Scotia. We started quite late and so didn't get out of our campsite until after eleven. We spent a couple of hours in downtown Charlottetown and so didn't have time to do anything after the ferry docked. Thus there is little for me to report.

KM 35456: Cornwall, Prince Edward Island

The great advantage of PEI being small is that you can get a lot done in a single day because you never spend that much time travelling between destinations. Take today as an example. First we went to Cavendish to look at some Anne of Green Gables stuff. Specifically we went to the farm which served as the model for Green Gables and has been restored to represent what it would have looked like at the time the book was set. It was a thing to see.

Then we went to Charlottetown to buy tickets to the showing this evening of Anne of Green Gables, The Musical. After that we headed to the edge of town to visit the Cows Factory. For those who don't know Cows is an ice cream shop which makes excellent ice cream; it has even been rated as the world's best ice cream by some magazines. Then we went to a small town a short distance out of Charlottetown to find a campsite and have dinner before going to see the show. That is a lot of stuff to do in a single day.

At Cavendish in the Green Gables National Historic Site we went around and saw some scenes which are famous from the book. The barn has been restored, the house set up much as it is in the book, and the Haunted Woods are there, as is Lovers' Lane. Now I've never read the books, but this is what I am told. In the gift shop we found a couple things of note. The first is chocolate covered potato chips. They are delicious, though likely not healthy in the slightest. The second is Raspberry Cordial, a drink which is apparently a favourite of Anne. We bought four bottles and have drunk one; it is not half bad.

The next stop of note was the Cows Factory. This is really a factory with a storefront. We took the factory tour and saw how they make their shirts (to some they are almost as well known for their pun-filled shirts as for their ice cream), their ice cream and their cheese. I didn't even know that they made cheese, but I suppose that is because they don't sell it at their Whistler store. It was a nice tour and everybody got a free sample of ice cream at the end. Of course that wasn't enough and we both got ourselves a cone before we left. I got a Don Cherry in a waffle cone coated with chocolate and sprinkles while Courteney had something with pineapple and mango in it.

Then we went and got a campsite for the night. Just as we were rolling in it started to rain a small amount and in the nearby bay there were a few lightning strikes. It promised to be a fun night, but the storm passed by quickly and caused us no trouble.

Finally we went back into Charlottetown to watch the show. This musical has been running for the past forty-five years. That is a pretty long running show. It was rather funny and quite good. I also now know what all the short actors who don't make it in movies end up doing: playing children in musicals. The effect was quite convincing. We both quite enjoyed it. I don't think it'll ever stop playing, so if you find yourself in the area you might choose to watch it.

KM 35275: Mill River, Prince Edward Island

Today we drove to PEI. It used to be that you couldn't do this and instead needed to take a ferry. Well, several years ago a long bridge called the Confederation Bridge was built from New Brunswick to PEI. That is what we drove on. I had heard that it had eight foot walls on either side which prevented any sort of view. I am glad to say that it isn't true. It does have solid concrete walls, but they are only three or four feet high and out of a truck you can see over them easily. I saw the entire island of PEI from end to end on our way in.

Upon arriving you enter a town called Gateway Village. It is named for an obvious reason and is really only a tourist place.

Our first major stop, excluding a gas station and a grocery store, was the town of O'Leary on the western side of the island. We went to this small town to visit the Prince Edward Island Potato Museum. When we arrived we found a Potato Blossom Festival in progress and had missed the parade by a handful of minutes. This meant that we got to see a number of the parade floats and the like dispersing. There were a number of old tractors, a poor person in a potato costume roasting under the strong sun and a bunch of other things.

After a while we eventually got past the traffic caused by the ending of the parade and reached the museum at about one in the afternoon. There are some museums you are surprised exist and find a bit odd. This was one of them when I added it to the list of places to see oh so many months ago. Having gone through it, though, it seems entirely reasonable. Not only is the potato the single largest crop on this island, it is the fourth largest crop in the world and is highly nutritious. The potato is native to South America, but the Eastern Europeans eat the most of them. Generally the potato only became a popular crop when famine hit. It turns out that potatoes can grow just about anywhere.

I also found it surprising that so many ailments afflict the lowly potato. From the Colorado Potato Bug which has been spread by people across the world to the Late Blight that devastated the Irish. After going through the museum it seems surprising that any potatoes make it to our table in the end.

Unfortunately the potato museum was the only stop in PEI that didn't involve Anne of Green Gables in some way. PEI truly only has two main exports: potatoes and Anne.

KM 34938: Miramichi, New Brunswick (Day 3)

There are many aquariums in North America and we have visited three of them. Today we visited the third. This was the Aquarium and Marine Centre of New Brunswick in Shippagan. Most aquariums only really have tropical and otherwise exotic species. I believe this is because they are nicer to look at and harder to come by. This aquarium is different. It is filled with species from the ocean surrounding, and the rivers contained in, New Brunswick. It is nice, for a change, to see fish that you can actually find in Canada.

Of note I saw my first whole Atlantic Cod, a live lobster that must have weighed twenty-five pounds, an albino lobster and a blue lobster. These were among other local species like Lake Sturgeon and Haddock. It was unfortunately raining lightly with a strong wind so we didn't spend much time watching the seals, even though Courteney likes them.

After returning and watching TV channels go off and on the air (likely because they are sent out by satellite from Toronto and Toronto was having severe thunderstorms) we had a nice dinner. Courteney's Grandfather had us sampling one of his strong and young blueberry wines. It'll be good in a few more months, but was a bit rough yet. Finally, after all that, one of his close friends came over to see us for an hour. I can't quite remember her name, but I think it was something similar to Gracie.

This was our last day in New Brunswick and tomorrow we are destined for P.E.I., the land of potatoes, sandstone and Anne of Green Gables.

KM 34716: Miramichi, New Brunswick (Day 2)

Since we have entered Ontario we haven't really had much sun. It seems that this part of the country hasn't had much in the way of a summer. This morning was alright so we went on a short tour of the town. It is a nice small town.

We also went to a small island called Midling Island. This island was home to a quarantine centre for the Irish immigrants who reached this side of the Atlantic sick. We had a nice lunch there and then made our way home.

Once there we let lunch settle and then I helped cut the front lawn. This is the first time that my offers to do work of some sort have been accepted. Oh well.

Right now we are just relaxing, watching the news and awaiting the rain that is supposed to arrive at around dinner time.

KM 34716: Miramichi, New Brunswick

After a loud night of rain and wind amplified by the tarp covering our tent we awoke to a relatively sunny day this morning. We made a breakfast as best we could with our limited supplies. We have been staying at peoples' houses for such a long time that we have run out of certain foods, such as pancake batter, and others have gone bad, such as our milk. This means we don't have a whole lot to work with. We made do though and had some bachelor's egg and toast.

There were two tourist stops on the agenda today. The first was Magnetic Hill. This is a hill where you roll uphill. It is quick, but actually quite a neat effect. When I return I will post the video I took.

The second stop was the tidal bore in Moncton. A tidal bore is a wave pushed inland up a river by the incoming tide, raising the water level as it goes. The wave we saw was only perhaps six inches high, but it did move upriver with good speed and did raise both the rate of flow and the water level. As the tides change so does the height of the wave. Construction on the river has decreased the height of the wave since the mid sixties, but there are long-term plans to fix this and bring back the multi-foot tidal bore.

Both of these attractions didn't take long to see so we made our way to Miramichi, where Courteney's Grandfather resides. We made it here quite early and he was not yet back from fishing so we went to the local library to read some of the magazines. Libraries are an often forgotten way of passing time while travelling. Only rarely is a library card required to read inside the library and they tend to have a number of periodicals that are up to date. We spent two and a half hours there and it was quite good.

After that we went back to Courteney's Grandfather's place and he was there. We proceeded to chat for a while, eventually had dinner and then chatted again. We really have no plans but to spend a bit of time here so seeing what we do tomorrow will be interesting.

KM 34510: Parlee Beach, New Brunswick

We are back on the road again for another day or two on our way to Courteney's Grandfather's. Which is just fine because the weather was poor this morning. We woke to a light rain and a good amount of fog. After packing the truck up and saying our goodbyes we headed east. Our first destination of the day was Hopewell Cape, where the Hopewell Rocks are to be found.

These rocks are more or less pillars which have been eroded out of the cliff by the tides. The tides at that point are forty-seven feet or thereabouts. The difference in height is so great that even with the tide going out for half an hour it went down perhaps four feet. At the Hopewell Rocks are some which are called the Flower Pot Rocks. These are true pillars which widen at the top with trees and grass growing on top.

It is even possible, when the tide is low, to walk beneath these pillars in the mud. Unfortunately the timing of the tides didn't work to our advantage at all. We arrived shortly after high tide at about one in the afternoon. We spent perhaps an hour and a half walking the paths and gazing at the Bay of Fundy, but when we left it was still nearly two hours until the tides were low enough to walk.

Instead of waiting we continued on to Moncton. Though we have two things to see in and around this city we saw none of them. Instead we got pelted by heavy rain and got an oil change. Nothing spectacular, but not everything about travelling can be exciting.

When we arrived at the campsite for the night it was raining cats and dogs while we sought a dry spot to pitch the tent. Luckily the rain broke for a couple of hours shortly thereafter so we were able to set the tent up and eat dinner while staying relatively dry. Of course afterwards it started to rain quite heavily again before ceasing for a time. We brought two tarps with us to deal with heavy rain. One is so large that we are able to set the tent up on one half of it and then fold the other half of the tarp over the tent completely. We are kept quite dry when we do this no matter how much it rains, as long as we roll the bottom bit up to prevent a puddle from forming underneath us.

KM 34190: Saint George, New Brunswick (Day 4)

Oh how the time flies. Today was spent visiting a friend of Courteney's mother, Ruthie, on the other side of St. John. We chatted there from about eleven until one thirty before heading back here. It was nice to meet her and Courteney seemed to enjoy talking to her. After we returned Courteney proceeded to bake another pie. This time it was strawberry-rhubarb.

While she was doing that I was fiddling to fix Maynard's Internet connection and burning a couple of DVDs of pictures to mail home as a backup. Network access has been much rarer at campsites here in Canada than in the USA. This isn't a problem at all except that it means that I have been unable to copy my pictures back home for safekeeping. So I will make use of the postal system instead.

After that was done and with some help from Courteney while I was burning stuff we got Maynard's Internet to not only work, but work wirelessly. This has been causing him undue trouble for the past two weeks.

In the evening there was another family dinner, this time a stir fry and swish kabobs. I truly think I've misspelled that. This dinner was good and everybody had a good time.

This ends our stay in St. George and the St. John region. Tomorrow we are up early to make our way north east to Moncton.

KM 34000: Saint George, New Brunswick (Day 3)

I made my entry too soon last night. After I made my entry we ended up going to the house of one of Gloria's children. He is of course an adult. They were having dinner so we chatted for a couple of hours. During this it was noted that there were fireworks that night. So we went back to Maynard's house. Shortly thereafter the fireworks were to start so we were whisked off there.

It seemed like the whole town showed up and the fireworks were quite good. Part of it was that we were quite close and could feel every explosion. But even the number, variety and combinations of fireworks was quite good. They lasted perhaps twenty minutes or half an hour before we made to leave. Of course as with anything like this there was a short traffic jam.

Today we did three things. First we went back into St. John to see the Carleton Martello Tower. This is a tower that was first built for the War of 1812, but was then used to fend off some Fenians, hold misbehaving soldiers during WWI before they shipped out and then as a fire control centre during WWII for the defence of the harbour. It is a nice tower.

After that we went to a salt water beach called New River. It was nice and sunny and we spent a couple of hours there. The water is from the Bay of Fundy which is connected to the Atlantic Ocean. The water was very cold, but it is the North Atlantic. The day was good and we flew a kite for a short while on the sea breeze.

I could have stayed the rest of the day, but we needed to return because Courteney was to make a pie for dinner that night. She did and made a nice blueberry pie. Dinner was back at Gloria's son's. Dinner was burgers and salad and corn on the cob. We also sampled a number of homemade fruit wines which were quite good. All in attendance raved about Courteney's pie. A number of them complained because they had a Weight Watchers weigh-in the next day.

And that was our day. It was quite nice. Before I go tonight I am going to express my thoughts on whether we are going to get to travel much of the North or not. The first thing to know is that the North is cold and consequently snow and ice come early in the year. This really means that if we haven't entered the North by the beginning of the third week of August we likely shouldn't, because we are likely to hit cold weather. We just aren't equipped for cold weather. Courteney has clothes only good down to about zero and I only have things good enough for about ten below on me. With the current time of year, how long it took us to get here, how much farther east we have to go and taking into account just how immense the North is, I currently do not think that we'll be able to make it during this trip. I'm not happy about this, but I do not have the money to outfit ourselves. I still hold hope, but I am getting ready to accept it.

KM 33858: Saint George, New Brunswick (Day 2)

Today was another day seeing Saint John. The first thing we saw was the Reversing Falls at high tide. The Reversing Falls is a section of the Saint John River which runs backwards when the tide is high or coming in because the tides are so high. After we saw that we went to go through the New Brunswick Museum. This is something Courteney wanted to see. It had a variety of things in it including art by Canadian artists, a hall of whales, a portion on the history of industry in New Brunswick and a few other things. The history of industry was particularly interesting because I haven't seen the changes of industry through time laid out like that before.

We wandered around the museum for not quite four hours and saw most of it. After exploring the museum and getting Courteney away from a stuffed puffin we made our way back to the Reversing Falls. It was by now low tide and not only was much more beach and rock above water, but the falls were travelling in the opposite direction. This caused a number of large whirlpools which we watched.

After spending a few minutes looking at the falls, which are more just reversing rapids at this point, we headed back to St. George for dinner.

Now normally I don't mention what we have for lunch unless we ate at some particular restaurant. Today I am making an exception because we had something special. For lunch we had lobster sandwiches. It is one of those odd lunches. Though I am told it used to be only the poorest who ate lobster, I am sure it is now only those who are well off. It was good and surely something different.

KM 33707: Saint George, New Brunswick

For those who don't know, St. Stephen is where Ganong, the chocolate company, makes their home. In this town there is not only a Chocolate Festival (in August, so we don't get to see it) but also a chocolate museum. It is for this museum that we came to this town. We did visit the chocolate museum. It isn't an enormous museum, but does have good information on the history of chocolate, how chocolates were and are made as well as a bunch of stuff about the Ganong company. There are also free samples placed throughout. It was a pleasant way to spend the morning.

After the museum we visited the original chocolatier shop. It is still selling chocolate and we picked up a few things. We had half the box of chocolates after dinner.

With that we left St. Stephen and headed east. It was still early so we went on to Saint John. This is the city where Courteney was born and lived for several years. Consequently we have a number of things to see. On our first afternoon there we saw the City Market and King's Square. King's Square is a nice little park with a few monuments and the like. The City Market is a covered open air market similar to the one on Granville Island. The ceiling is made to look like the hull of a ship upside down. We walked through that and thought we saw lobster for three dollars a pound. Alas, upon further inspection it was crab which was three dollars a pound and not lobster.

After seeing those couple of sights we headed to St. George to stay with a couple of Courteney's relatives. These two, who go by the names Gloria and Maynard, live in St. George which is about half an hour out of Saint John. We got there at about three thirty. We chatted for a while and then went out to get some lobster from a local fish market. It ended up being not quite eight dollars a pound. Then it was back to their house for a lobster feast.

I've never had a full lobster before; in the West it is just too expensive. So I needed to be shown how to eat one. Conveniently I had two old Maritimers to show me. I quite enjoyed it, but it is by no means a tidy meal.

KM 33405: Saint Stephen, New Brunswick

I must reiterate, New Brunswick is small. Instead of only being able to make a single stop in a day we made nearly two. First we drove a short distance to Fredericton. In Fredericton we had three things to see. The first was the Beaverbrook Art Gallery. This is a medium sized gallery founded in the 1950s. In it we saw a number of things. The first two were temporary exhibits of native artists. In one of them there was the oldest birch bark canoe in the world, built in 1825. It had spent 180 years in Ireland, was only recently put on temporary exhibit in Canada and has most recently been repatriated.

Elsewhere in the gallery there are a number of pieces from painting masters and a number of items from the medieval and Renaissance periods. This includes one tapestry which is perhaps twenty feet square and is one of three surviving from the original set of twelve. They used to decorate the dining room at the French Palace, but this one was found in an old chest. It depicts a hunting scene from the time of the Holy Roman Empire perhaps five hundred years ago. There are also a number of pieces of furniture that were interesting to see and were well built.

After going through that museum we walked to the truck for lunch; we had parked it in a public parking lot near the city hall. After a fine lunch of moldy cheese and a peanut butter and jam sandwich without the bread we proceeded to walk to see the Garrison district. The Garrison district is the historic district. We walked around there a bit, but it wasn't as interesting as I had thought it would be. That may have been due to our experience with other historic settings and the fact that, it being a Friday, not everything was running.

Instead of spending much time there we took a walk along the river. We picked up two ice cream cones and walked for a way before returning. It was nice, though the day was rather grey. At least it didn't rain. Unfortunately, because there wasn't nearly enough sun, we couldn't make use of the sundial on the side of one of the old buildings.

Leaving Fredericton we went to St. Stephen. This is just a small town, but it has a few stops which I'll describe tomorrow after I see them. Instead I'll describe the things we did in St. Stephen since we arrived too late to do anything but settle in.

Firstly we took a brief tour of the town looking for propane. Auto-propane is sometimes hard to find and even though New Brunswick is small I do need to fill up eventually. We did find it, but at $1.11 per litre. We are running on gasoline and I haven't decided if I am going to fill up at such a costly price. We had also planned on camping tonight. Alas, there appears to be only one campsite near this town and it is closed. I am not sure what could cause a campsite to close; the only real costs are the land and labour.

With our lodging plans for the evening being changed we needed to find a motel. Conveniently this town is small enough that its single visitor centre is easily found. We stopped there and had a nice chat with one of the ladies there about the available motels and other attractions in the area. We eventually decided on a motel somewhat outside of town. When we arrived the price was fine as motels go, but no motel will ever beat camping. This motel was first built in the fifties and is the nicest non-chain motel we have stayed in yet.

Now motels mean we cannot cook our own food. Instead dinner was to be found at a nearby diner. Courteney chose a lobster roll, basically a sandwich. I went for a seventeen dollar seafood platter. I like seafood and when in the Maritimes seafood is everywhere. Well, that platter almost killed me; it was all that I could do to finish it. I won't make that mistake again. Seafood is cheap here. Tomorrow I'll see about finding my way to a fishing dock to pick up some lobster for dinner. Apparently they can be had there for about five dollars a pound. That should be fun. But since my dinner was so large I could only watch while Courteney had a nice chocolate cream pie.

KM 33000: Kingsclear, New Brunswick

Today we left the first of our language ordeals and entered New Brunswick. We made it through our short stretch of French territory surrounded by people who can manage broken English when necessary. On our way back we will spend a much greater amount of time in Quebec, and northern, small town Quebec at that.

Also, New Brunswick is quite small. We drove from the north-western corner to nearly the centre in a day and still managed to see the world's longest covered bridge in Hartland and the world's largest axe. The covered bridge crosses the St. John River and heads into a small town. It is well over a thousand feet long and takes a noticeable time to drive across. It takes perhaps five or ten minutes to walk across.

As a single lane bridge it isn't necessarily the most practical, but since it has been there for more than a hundred years I don't see this town getting another bridge or getting rid of this one. In the visitor centre nearby we bought postcards and Courteney found a large plush lobster and found it so irresistible that she bought it. I'm not sure what she is going to do with it, but she has it.

After the bridge we drove a ways and arrived at a town with an axe so large that it is hard to describe. It was a double headed axe nearly to scale that had an axe head perhaps fifteen feet wide and two feet thick.

We will be in this province for about a week, even though it is so small, because Courteney has family to see and a number of sites she wishes to show me. It may be a nice change not to need to drive so much every day.

KM 32912: Riviere-du-Loup, Quebec

Quebec City is old. This would seem obvious from the fact that it recently celebrated its four hundredth year. However, other cities through which we have travelled which are well over two hundred years old don't feel nearly as old. I believe that it is the French pattern of three story buildings made of stone which makes it feel as old as it does. Firstly, the buildings are large enough and well constructed enough that tearing them down to put up a new building isn't worth it. Being well constructed also reduces the likelihood of them falling down. Then there is the stone. You just don't see stone construction often in North America. Lots of brick, it is true, but brick shows its age much more quickly.

In any case we visited Quebec City today. Specifically we stuck to Old Quebec, which is the nicer spot anyways. It is full of, as mentioned above, three story stone buildings. It is nice. I had been there once before during a Coop term when I was flown out. I didn't see much then though. We had a few things to see while we were in the city and, as we arrived near noon, lunch was first on our agenda.

In Quebec City there is a restaurant called Restaurant aux anciens Canadiennes. This is a pie stop. So we stopped in and both had the daily special of a bison meat pie. It was quite good. Now, the last time I had been in Quebec City I had a beer which was the darkest stout I have ever seen. It is called Boreale Noire and it is black. For those who know, it is even darker than the Black Plague. Being back in town I treated myself to a bottle over lunch. After the meal it was on to the reason for eating there, the Maple Syrup Pie. It was quite tasty and not nearly as sweet as I had feared.

After lunch we went on a walk around the Old City. Our first stop was the wall. Quebec City is not only the oldest but, if I am remembering correctly, also the only remaining walled city in North America. It is a stone wall perhaps fifteen or twenty feet tall.

After this we went to look at the Citadel. We saw a bit of it, but not all of it because it is still an active military base and home to the French-Canadian regiment. We could have taken a guided tour, but Courteney didn't feel up to it. That is certainly a place to be posted. The Citadel is on one end of the Plains of Abraham, where the historic battle which caused New France to become a colony of the British occurred.

Finally we walked just outside the walls of the city to reach the parliament buildings of Quebec. They take a bit of liberty in using national terms, but I suppose that is what many consider themselves. It is of an entirely different style than the legislative buildings which I have seen to this point and is really just an enormous stone rectangle with a large stairway to the front doors. It is nice, but rather uniformly grey.

That pretty much covers the sites we wanted to see in Quebec. We will likely travel back through Quebec on our way to the northern portions of the provinces, but have no listed stops. We are currently near the New Brunswick border and will begin exploring that province tomorrow.

KM 32506: Ste. Madeleine, Quebec

I have often wished that I could speak French fluently. Alas, due to a lack of practice I cannot, although I took French for the majority of my elementary and secondary schooling. I believe that I got rather reasonable at conjugation and forming sentences. However, I was always slow and my vocabulary is as small as you would expect. Even though I know a few people who speak French I feel too bad to make them endure conversations with me speaking at a two year old level. But being here in Quebec and having to get by is bringing back small snippets of high school French.

So we are both making our way with what little French we remember from school, the cereal box French we know by heart and what our little phrase book helps us out with. When we arrived here at the campsite we met a nice man across the fence who decided to talk to us. We know but a little French and he knows but a little English, but we made do and managed to have a slow and stilted conversation.

Earlier today we were in Montreal. We didn't have any particular stop so we walked around the downtown for a couple of hours. We had lunch and took a look at how expensive Just For Laughs tickets were. The lunch was good, the tickets about eighty or ninety dollars a person per show. This meant that we couldn't see a show, but perhaps next year. The downtown was quite nice and I think that if I knew French I could live there with relative ease. This means that there are two places I could go which are big for my industry: Vancouver and Montreal. The third, Toronto, didn't appeal to me at all.

KM 32264: Ottawa, Ontario (Day 6)

The Canadian Museum of Civilization is deceptively large. Enormous I would call it. We spent the entire day there and only finished seeing perhaps two thirds of what is there on display. We had fun and I got to enter another new province, Quebec.

This museum has a number of neat things and displays, including the Canadian Postal Museum and a linear history of booming industry and life in the various Canadian provinces and territories. The history itself moved east to west and yet was mostly linear in time. It is a bit funny to see the development of Canada happen moving to the west in a progression over time. Especially since most of the eastern provinces had plateaued before a more westerly province really took off.

There is also an exhibit of Egyptian artifacts which Courteney quite enjoyed. These are the things on permanent display that we saw. We didn't have time for the permanent displays on the Natives of Canada or the historic people of Canada or the Children's Museum. There were also a few temporary exhibits which we saw. The first was an exhibit on mythical monsters. These included dragons, unicorns, sasquatch and the like. It was pretty nice and covered a number of creatures I had never known of, from countries and continents such as Mexico and Australia.

The other temporary exhibit we saw was the Royal Stamp Collection. It was interesting to see and showed mostly the very old stamps from when postage stamps were the new thing. It is a bit odd to think that postage stamps haven't existed forever, that postage hasn't always been cheap and that it used to be the receiver who paid.

We finished looking around at about five in the evening and were both exhausted. Stacy came to my Grandparents' after work and said goodbye, which was nice. It is impossible for us to expect others to rearrange their lives when we arrive, because we can give no notice, but she put the effort in anyways. They all did really, in that they mostly showed up for dinner on Saturday.

This ends our stay in Ottawa. We have seen all the family in this part of the country and have seen all the sites we desire to see. Tomorrow morning we are heading back into Quebec to the city of Montreal. It should be an interesting thing to see, I just hope that the French doesn't trip me up too much. It should be alright in the parts we are going this time, but on the way back we are planning on heading through the more northern parts of Quebec where English is a foreign language.

KM 32240: Ottawa, Ontario (Day 5)

Cheese curds and sunshine were the big things that happened today. We woke up and moved ourselves to the Dugay's again. Once there my Uncle Jim was kind enough to drive us to St. Albert's. St. Albert's is a place with an awesome cheese factory which packages and sells fresh cheese curds. These are also known as squeaky cheese because they squeak when they are fresh. On the way back we picked up a sandwich each and returned to their house to eat and have a couple of beers in the sun. It was quite nice.

We had come prepared to take a swim in their pool, but it was too cold because they've had rather cool weather the past few weeks. They even fed us dinner, which was also nice as we could talk more than we did the previous night when we bounced around a bit trying to see everybody.

KM 32179: Ottawa, Ontario (Day 4)

It is a nice change to be woken by the sound of thunder and be able to roll over and go back to sleep. In a tent we don't have that luxury. Today was a wet one. The weather rotated between overcast, raining lightly and pouring cats and dogs. It did this all day, even after we had left the cottage. This unfortunately means that we were mostly stuck inside the cottage watching satellite TV until we left in the early afternoon. Such is life however.

After returning to the city there was a family dinner at the Dugay's (the family of one of my mother's sisters). It was good as it meant that I got to meet several family members who I would have had to seek out otherwise. As you might imagine, with people's work schedules that would be difficult. It was nice to see everybody. I've now met all the new children of the family and I believe that I am the first in my immediate family to do so.

I even got Grandpa to drive out to his daughter's place. This is a trip which people apparently find it difficult to get him to make. Now this also meant that I couldn't stay quite as long as I would have liked, but that is alright. We are going back there tomorrow to get some cheese curds, which are only good when they are fresh and squeaky.

KM 32179: Griffith, Ontario

The Canada Science and Technology Museum is similar to Science World in Vancouver. Both of us enjoy playing with science toys so we spent the morning and half the afternoon visiting it and having fun. I had been there once before, perhaps ten or fifteen years previous, when I came out to visit my grandfather. We had plenty of fun watching the electricity demo, though unfortunately Courteney didn't get up to try the Van de Graaff generator. One day I'll get a picture of her using one.

Of course most of the exhibits have been changed from when I was there. This time around they had an exhibit about Canadian inventions (bug repellent!) and one on a photographer by the name of Karsh. Both were rather interesting. I especially liked the radio and telegraph exhibit.

After leaving the museum we went back to my grandparents' place and my Aunt Katie and her husband Ken were there. It was good to see them and we talked for about an hour before they had to leave.

After this Stacy showed up to take us to the Cooper Cottage where Amanda and Cory are staying. There we had a few beers and chatted by the fire. I wish we could have had more fires during our trip, but it just hasn't been practical. So that is what we did.

KM 32165: Ottawa, Ontario (Day 2)

As I was saying before, my grandparents live just minutes from downtown Ottawa, so today we spent the day in centre town, as they call it. First we went to the Parliament buildings. When we arrived we caught the tail end of the changing of the guard. We saw them march down the lawn and then off onto the street. There were lots of people crowding around, as people tend to do. Well, a police officer came running to clear the troop's path of onlookers. After this had been done the troop started to march out, towards the road and the line of onlookers. Well, the lead man was marching toward a particular group of onlookers. And marched towards them. And marched further towards them with no indication that he was going to stop. When the leader was about fifteen feet away from the onlookers they started to get nervous. At ten feet some started to back up. At five most were starting to freak out as they would assuredly be run over. At three feet the leader turned, stamped his foot and continued on his way. I am certain that man enjoyed doing that.

After the huge crowd of people dissipated we went to get ourselves free tour tickets. After getting them we had an hour before the tour started so we went ahead and took the self guided tour around the buildings. We saw all the various monuments and the like that are placed around the buildings. The most beautiful building there is the Parliamentary Library. It is also the only original building which remained after the fire in 1916. You'll need to wait for my pictures or find pictures yourself, but it is quite the library.

Eventually we finished the walk and had a bit more time to spend before our tour started, so we sat in a shady spot and listened to the musical bells in the tower. It is actually an instrument operated by directly connected pedals. It is quite the thing and is as capable as a piano. Except that some bells weigh nearly three thousand pounds.

So our tour started and went much as you would expect. We saw the House of Commons, the Senate chamber, a couple of other small rooms and the inside of the Parliamentary Library. The library is all done up inside with intricately carved wood where the rest of the building is stone.

Upon finishing the tour we hit the boutique, as they call the gift shop. We bought a couple of postcards and two maple syrup lollipops. These have got to be the sweetest lollipops I have ever tasted. Even Courteney with her sweet tooth can only have a little before putting it away. They are likely to last at least a week, if not two.

Leaving the Parliamentary buildings we went to wander a bit downtown. We ended up in a pedestrian mall and checked out a few shops. On one end of this mall was the War Memorial and the grave of the Unknown Soldier, which we saw. On the opposite end was the Currency Museum. This was on our list so we took a peek.

Inside they had the history of money laid out with artifacts related to the times. It was interesting to see, especially the progression in the quality of coins from rough lumps to roughly stamped coins to the well shaped coins of today. Also interesting was the early history of money in Canada. It started as beaver pelts, with the Hudson's Bay Company having all their prices in beaver pelt equivalents. At one time Quebec (as a colony of France) used playing cards because there wasn't enough coin to go around. It is somewhat amazing that money made of playing cards could have worked, but it did.

This museum also has a large selection of bills and coins from all modern times and places. This means that I saw a thousand dollar Canadian bill. There was even for a time a fifty thousand dollar Canadian bill that was used only by banks. It is a bit of a shame that electronic funds transfers like debit and credit cards have done away with the thousand dollar bill, but I guess life must move on.

Thus ended our first full day in Ottawa. We still have many more things to see and we'll be here a while more. My grandfather picked us up and took us back. We had a nice dinner of salad and spaghetti and then whiled the rest of the evening away chatting and reading. It sure is nice to not have to drive far to get places and not have to set up camp when we are done for the day. We will be sure to enjoy this while we can.

KM 32165: Ottawa, Ontario

Well, we really did nothing today except drive from Port Perry to Ottawa taking the scenic route. It is some relatively nice farm country, but not much else can be said of it.

We have arrived at my Grandparents' house and will be staying here for a few days. They have lived here for decades and so the house is conveniently located minutes from downtown and most of the things we have to see in this city. It will be good.

The most noticeable change from the last time I was here is that the enormous maple tree in the backyard has been replaced by a small maple tree. Apparently the old one had begun to split and was threatening to fall down on the house in a stiff wind.

KM 31793: Port Perry, Ontario (Day 5)

Another day and another two attractions. The first was the Canadian War Heritage Museum in Brantford. This is a smaller museum, but it has a nice collection and lays out the progression of the wars quite nicely. Again it was staffed by a veteran, which I still find puts me under a bit of pressure. It was good though. They apparently also have a couple of old military vehicles that are still in running condition, but they hadn't been brought out for the season.

Those heavy vehicles all have relatively weak engines. The largest vehicle there had a mere eighty horsepower and the most powerful engine was less than a hundred and fifty horsepower. You'd be hard pressed to find a new pickup truck with less horsepower than any two of the vehicles there combined. It sure makes you wonder why newer vehicles need so much power.

After leaving the museum we headed to the Reptile Zoo which we missed yesterday. It was fairly nice and had a reasonable collection of snakes, lizards and a few other things of interest. The most interesting was the albino alligator which was on loan from another zoo. There was also a pair of Nile Crocodiles, which are huge. The largest one was at least twelve feet long. Not something I'd like to meet in a river. Finally there was a group of half a dozen small alligators, one of which was quite active and would follow us from one end of their tank to the other.

Unfortunately we were unable to get in contact with a number of the friends and family who live in the area. We have finished seeing the sites we have for the area and tomorrow we move onto the Ottawa region. We would like to stay and see the people we missed, but though we have no fixed schedule we cannot afford to stagnate.

KM 31405: Port Perry, Ontario (Day 4)

The resting place of the Stanley Cup is the Hockey Hall of Fame in Toronto. Today we went there. It is much as you'd expect, with artifacts from all the ages of hockey and notable moments in the game's history. I found those moments more interesting than most of the artifacts. There are also some video games, some hockey games (such as playing goalie against a video opponent) and two vaults of trophies. In one they display replicas of current minor hockey trophies and actual retired trophies. In the other they have not only the original Stanley Cup, but also most of the modern professional cups in either replica or actual form. The replicas are all quite good. When we were there the real Stanley Cup was in Pittsburgh. Had it been in house we could have touched it and had our picture taken with it.

The Hall of Fame also has a gallery on international hockey, which I found interesting. Finally the Hall of Fame has an extensive gift shop of branded merchandise for most of the teams. If you ever want to get a pencil or alarm clock or scoreboard lamp for the fan in your life check this place out.

After we had gone through the Hall of Fame we headed to Niagara Falls. The Canadian side is absolutely better. Not only are the falls better, but the views are better. Of course we went on the Maid of the Mist boat and not only did we get wet, but we also enjoyed it. It is well worth the money.

We didn't spend long at Niagara, but enjoyed it nonetheless. After Niagara we tried to visit a Reptile Zoo, but we didn't have the address and our GPS navigation box failed us. Tomorrow.

KM 30945: Port Perry, Ontario (Day 3)

Toronto is a very large city. Even if I hadn't known this before arriving there is no mistaking it from the streets. There is just something in the way big city streets are laid out, the way they are paved and how parking works that is different from any other type of city.

Well, today we went to Toronto. The original plan was to go up the CN Tower, then to the Hockey Hall of Fame before going to Medieval Times for dinner. Unfortunately the line at the CN Tower to go up to the top observation point was well over an hour long and by the time we finished there it was nearly two in the afternoon. Medieval Times opened its doors at two thirty for getting tickets and seats and examining some of the things they have there. So we walked over to check the hours of the Hall of Fame, but could not go in because we needed to find our way to our dinner-theatre.

There are a couple of things other than just the observation decks at the CN Tower. First there is the short movie describing its construction, design and a few anecdotes from people involved in the project. I didn't know it, but the practical purpose of the tower is to serve as a broadcast and telecommunications tower. Build an antenna tall enough and you can transmit over skyscrapers. The movie is worth watching.

Then there is a simulation roller coaster which has a theme of a futuristic tree factory and mill. Unless simulator rides are your passion I wouldn't recommend spending time doing this. However, the ticket which gets you everything also allows you to skip a bunch of the line to the elevator up and so is likely worth the few extra dollars.

So we'd gone up to the main observation deck, looked around and then waited in line forever to go up to the tallest man-made observation deck in the world. After coming down it was time to go to dinner. Medieval Times is a dinner-theatre that is fairytale medieval themed. That is, the knights are chivalrous, the king doesn't have gout and the evil prince plays fair most of the time. Even dinner is themed. Firstly there is no cutlery; you eat with your hands. I quite enjoyed tearing my half chicken to bits in order to consume its delicious flesh, but I imagine some in the audience didn't fare so well.

The show itself is a mix of horsemanship demonstrations, knight skill demonstrations, plot and tournament fighting. The horsemanship was pretty cool; they crab walked a horse and had a horse walking on only its hind legs. There was also precision formation riding. The knight skill demonstration was mostly done on horseback and included catching large rings with a lance at speed, spearing a target at speed, passing flags back and forth at speed and catching a small, four inch steel ring at speed. I quite liked the latter one as it is truly difficult and only one knight managed it.

I would rate Medieval Times as a place to go and a thing to see at least once.

KM 30763: Port Perry, Ontario (Day 2)

Today was a rest day in order to help Courteney get over her cold. We just mostly hung around my Aunt Debbie's place. We did go and see a movie, Transformers, which wasn't bad and then spent a couple of hours reading and enjoying the sun which came out while we were in the dark theatre.

So Courteney is well on the mend, but I am getting a cough now. Sometimes we just can't win.

KM 30702: Port Perry, Ontario

After an easy day of driving we arrived in Port Perry. Well, we arrived in Oshawa because we were looking for a movie theatre. We didn't find one, but we did find propane for fifty cents a litre, which is good.

Anyways, we eventually arrived at about two thirty to a warm introduction at my Aunt Debbie's house. She put on a nice dinner. Tyler even showed up with his family in tow. It was nice. We spent most of the evening chatting, though Courteney did help my Aunt with some knitting.

Tomorrow is going to be a rest day to ensure that Courteney is all better before we go to Niagara Falls. She has been getting better, but isn't quite a hundred percent.

KM 30459: Parry Sound, Ontario

Yesterday was Canada Day. Most municipalities in Canada, especially the larger ones, have fireworks the evening of Canada Day. We were even lucky enough to arrive at a sizable town. Unfortunately because Courteney is sick we had to go to bed early and did not get to watch the fireworks.

Everybody in Ontario should own a canoe. Ontario is so full of lakes, streams, creeks and rivers that everybody should spend time touring their waters. Now of course this is much easier than it used to be since the invention of DEET. According to Wikipedia DEET was invented by the US military, so let it never be said that enormous military budgets haven't helped make the world a better place. Driving through this country I wish for two things. Firstly, that I had a canoe with me. Secondly, that Courteney didn't hate bugs or water nearly as much as she does. I'm going to need to find some outdoorsy friends.

Well today was another day full of driving. There was a short break in the middle of it when we saw the big nickel. For those who are unaware there is a giant nickel (made of nickel) in Sudbury, Ontario. Sudbury is known for its nickel mining so I suppose it makes sense. Anyways, this is a scale nickel approximately fifteen feet in diameter. It is something to see, especially if you are feeling touristy. Other than that we have really just driven. I am happy that tomorrow is the last complete day of driving for a while as I am getting tired of sitting and seeing nothing but highway for hours on end.

In other news Courteney was a bit better today than yesterday, so she is likely on the mend. Perhaps she'll learn to listen to me yet when I tell her how to improve her situation, whether that be by going to bed early or by looking where she is going to sit before sitting down, even if she only left the seat a minute ago.

KM 29951: Sault Ste. Marie, Ontario

Today was all about driving. Or at least, it was all about driving once I got a miserable Courteney to have a shower, eat her breakfast and dry her hair. Courteney is slow on the best of mornings. On mornings when she is sick she is downright glacial. She is still quite under the weather, though she is not so bad once she gets moving.

As I was saying, today was all about driving, as can be seen from the number of kilometres we covered. There isn't anything we really want to see in this area until we hit Sudbury and we are pushing to reach family not only so we can mooch a warm bed (to heal Courteney of course), but also to see them. This is important because, unlike the USA portion of the trip where I had the second half of the funds, meant for Canada, to fall back on if things got expensive, in Canada the trip simply ends when I run out of money. Courteney being sick doesn't help matters at all because she is even more miserable than normal when the thought of camping in the rain comes up. And of course cheap motels cost two to four times as much as the average night of camping. It will also be nice to see the family I haven't seen for a number of years and won't see for at least a couple more after I get married, because I'll be too broke putting together a home.

Hopefully it stops raining soon, Courteney gets better soon and propane gets cheaper soon. I am quite surprised to see that propane isn't nearly as cheap out east as it is in the Lower Mainland. Back home it is always at least thirty cents per litre cheaper. Out here it is more like ten cents and I have even seen it more expensive at one station. There is something wrong with being charged nearly a dollar a litre for something which was fifty cents back home when I left.

Also, Lake Superior is very big.

KM 29345: Nipigon, Ontario

It turns out that Ontario is larger than I first believed. Or rather, what I call western Ontario is bigger than I first believed. In some ways it is perhaps a country in itself. It certainly has a much different feel in this part than the other parts of Ontario I have been in on past trips east. Perhaps this explains why people out here tend to not take into consideration the needs of the actual western provinces, simply because western Ontario is already so different that they consider it impossible that anything could be as different again.

Let me start at the beginning however. The reason I discovered that western Ontario is so much larger than I first believed is that Courteney is sick. It isn't anything serious, but Courteney doesn't take well to being sick and travelling is hard enough on her already. So I thought we might push our travel a bit in order to arrive at my Aunt Debbie's house sooner such that she would have a warm place to get better. I thought it couldn't be more than eight or nine hundred kilometres away from where we stayed last night, Dryden. Well was I wrong. It was actually sixteen hundred kilometres and our GPS claimed it would take twenty-two hours to drive it. So we'll see when we actually arrive. Until then I'm not sure how I'm going to keep her warm so she gets better.

Speaking of my GPS: the GPS contains within it a directory of services and businesses from, I believe, all over the world. How useful and accurate it is really depends on where you are. The maps themselves are pretty complete; as I mentioned previously it has at least some of the US Forestry Service roads in it. Something I may not have mentioned is that it contains mapping information for Europe, and when searching for cities we have occasionally been forced to ignore results from Russia and Europe. Now in the USA the business directory seemed pretty good. If it didn't find something we were looking for in a particular small town, then it didn't exist. In Canada, however, it has been less good. It is fine for larger cities, but it falls down on the smaller ones. One example which was annoying was motels in Golden. We knew there were plenty of motels in Golden because we had driven by them, but when we went to search for a list of them to make a decision the GPS turned up only three.

It is always annoying and frustrating when you cannot trust your tools.

During the drive today we saw two moose, several deer (only one on the road) and one wolf. It is surprising how much wildlife you don't see in the Lower Mainland. I find myself wishing wildlife on the highway was a legitimate worry where I lived.

KM 28882: Dryden, Ontario

We started the day by visiting the Royal Canadian Mint. They have a tour where they show you the facilities and the various steps needed to mint coins. The tour is from an elevated and enclosed platform. I learnt a few things. The most interesting is perhaps that the Canadian Mint makes coins under contract for a number of countries, even a couple of coins for the USA. Also, the original poppy quarter was the world's first coloured coin. Finally, most new Canadian coins are stainless steel with a coating of nickel and iron. Things I didn't know.

The mint also has a coin boutique, should you be a coin collector and want something new and special. They have a few other things. The most fun is the option to hold a solid gold brick. That's right, half a million dollars and twenty eight pounds of gold can be in your hands. Of course you can't leave with it; the chain and armed guard make sure of that.

The mint also has an interesting souvenir. They've taken one of their old stamping machines and hooked it up to stamp a stainless steel blank with the mint logo and the current year.

And so we left the prairies and headed into Ontario. In preparing for the next bit of the trip we took a look at the map. The first odd thing you'll notice is that the people from Ontario call the western part of the province North Ontario. This is odd because we are driving at about the same latitude as Kelowna. The second and perhaps more odd thing is that not only is Ontario large, it is also empty. Find a road map of Ontario and look at the highways. There is nothing in the northern two thirds of the province. Yet if you look in similar places on the west coast you'll find highways well into the north; you can even drive all year round beyond the tree line. I truly wonder why there is nothing in northern Ontario.

KM 28496: Winnipeg, Manitoba

This just in: you can make butter with regular 1% milk if you shake it enough. I went to the truck this morning to fetch the things necessary for a cereal breakfast only to find that our milk had gone chunky. Not the chunky that happens when it has gone bad, but instead the chunky where all the fat in it has grouped together, leaving white blobs floating in a pale white liquid. We opted to not have cereal and instead had cereal bars. This is how we started our first full day in Winnipeg.

Today was a historical tour of two of the National Historic Sites in the area. The first was Lower Fort Garry. This fort is a short distance north of the city and is an HBC fort first put into service in the late 1840s. They have perhaps eight buildings all decked out with period furniture and living articles. Most of the buildings are also manned by actor-guides in period dress. Each actor has a story to tell about what part they are playing in the fort, but they are also quite knowledgeable with respect to facts that a person in their position during the operation of the fort would know. It was quite a nice experience and I highly recommend it to anybody in Winnipeg with half a day to kill.

After Fort Garry we paid a short visit to the Riel House. This is the historic house of the Riel family, most famously Louis Riel. It is set up for the year 1886, the year after his execution. There we found a guide in period clothing who was also quite knowledgeable. I'm not sure if this is normal for National Parks, but I like it. It was especially nice at the house because it turned eight hundred square feet of tourable house into an entertaining and educational hour long tour.

The most interesting portion of the Riel house is the minor historic point that it records. This is the French system of agricultural division. In the English system farms are made square in shape. The French system has long narrow farms instead. This is more sensible than it first appears because each farm was ensured access to a river for water and transportation. Each farm also had a variety of land available for use, from fertile land next to the river for vegetables to heavily forested land up to three kilometres from the river banks. Each farm tended to be 250 metres wide and three or more kilometres long. I think it is an extremely well thought out system that is good for self-sufficient or nearly self-sufficient family farms.

The final thing we had before retiring to our room for the evening was a slushie apiece. Winnipeg is the slushie capital of the world after all, so it was only fitting. Things went about as expected: we both got a fair amount of brain freeze, I became fidgety (perhaps the 1.2 litre cup was a bad choice) and Courteney simply felt ill. It all ended alright though, so tomorrow is another day of adventuring.

KM 28387: Winnipeg, Manitoba

Today was one of those days in which we are forced to consider the more mundane tasks of life. Tasks such as laundry. So we spent a couple of hours sitting in a laundromat. While heading back to the truck for a snack as we waited for our laundry to finish, I discovered that my grill had caught something slightly unusual. In addition to the normal variety of bugs and the odd leaf I discovered that I had caught a bird. It took over twenty-eight thousand kilometres of watching small birds on the highway take flight in front of me, but I finally caught one. I am quite surprised it took this long.

After dealing with the necessary chores of laundry and scraping a bird out of my grill we arrived in Winnipeg. Before finding a room for the next couple of days we ate dinner. One of the things I put on my list of things to do was to eat sushi in the prairies. Early in the second half of the trip we decided to wait until Winnipeg to do this because it is really in the middle of the country. So that is what we did for dinner. It was not bad, but as you might expect rather expensive. This did mean that they put effort into the small things, such as good green tea and raised floors with a hole around the table for your feet. This gave us the appearance of kneeling at the low Japanese tables. That was neat.

Now we are tucked away in a cheap motel for the next two nights. Two nights is really the minimum amount of time it is going to take to see all the stuff we need to see here. I had considered camping, but it was raining all day. Earlier in the day, just as we finished our laundry, it started to rain hard. We had to fill up on propane before leaving, and as we waited in line for the man to pump the propane the skies opened and started throwing lightning and thunder around. I got soaked waiting outside.

For this reason we aren't camping now and will stay in a motel. It also means we are guaranteed a shower. Unlike in the USA, it appears that most campsites here have pay showers, and we never have much change because we use it for small purchases, parking and the like. Just something interesting.

KM 28001: Moosomin, Saskatchewan

This afternoon we left Regina and continued east. In the morning we saw the RCMP Depot and Heritage Centre. The Heritage Centre is basically a museum which explains the history of the force and has a number of artifacts from various eras.

It is an interesting thing to see, especially the equipment the early force carried on their patrols of the north. They truly did not have much. Perhaps the most useful things they had were their rifles and their red serge. In some ways I wish the RCMP still wore the red as a regular uniform, but I can see the argument for a more modern one.

After perusing the museum we watched the Sergeant's Parade, followed by a short guided tour of the base. It is an interesting tour to take, though it doesn't travel far into the base since it is an active training facility.

We finished with the RCMP at around 2:30 in the afternoon. We then left town heading east. We ended up at a campground in a small town just west of the Manitoba border. When we arrived there was already a good swarm of mosquitoes milling about and the number only increased as the day wore on. We quickly put up the tent, cooked dinner and ate. In the end we finished up at around 6:30 and hid in the tent.

Now I have often heard that mosquitoes are bigger in the east and the north. I never quite believed it, but now I have seen it for myself. The mosquitoes we saw were about twice the size of the ones I have seen in the south western area of British Columbia. This is bad, but for the first time in my experience bug spray has been successful at keeping the mosquitoes at bay. Usually all bug spray seems to do is annoy me. It certainly worked this time, and it was only 25% DEET. The bugs are taking their toll on Courteney though. She is seriously mentioning her desire to end the trip. She is going to go on as far as she can, but this evening she claimed that Churchill, Manitoba is likely the end of the road for her. We shall see.

KM 27740: Regina, Saskatchewan

Today we arrived in the big city of Regina. To get here we had to drive about a hundred and fifty kilometres of road, most of it good. There were few bends and those that did exist seemed put in place more to keep drivers awake than to avoid anything.

The first thing we did upon arriving was visit the Royal Saskatchewan Museum. This is a relatively small museum focused on the province of Saskatchewan. It covers the geology of the area, the native culture of the province and the environment through time. It starts with rocks, goes through the native peoples, then through dinosaurs and the ice ages before arriving in more modern times with the animals and environments you'd see today. It was nice to see.

By the end of it Courteney's back was hurting and my watch read 3:30 PM, so we decided to get a room for the night and perhaps let Courteney lie down before dinner. Upon arriving I checked out the weather channel, as I always do when it is available. It was now that I discovered it was in fact 2:30. We had half the day left. Courteney had recently mentioned that she wanted to see the movie Up while it was still in theatres. So we went to a theatre.

Upon arriving it turned out that the first showing was not until 6:30. We had a bunch of time to kill. To do so we went to the legislative buildings. Since we will be travelling through all the capital cities I thought it a good idea to at least see all the buildings to round out the trip. We did this and then wandered around a garden nearby and eventually rested for a while on a bench beside what I believe was a river.

After tiring of this we travelled to a nearby pizza place for an early dinner. We finished up just after five and still had lots of time left. We had passed a blood donation clinic on the way to the pizza place, so I thought we'd stop by and I'd give some blood to kill time. Well, we found it again, but it had closed at one PM today. I'd give blood more often if they weren't always closed whenever I have the time.

Well, after this we returned to the movie theatre and proceeded to waste forty-five minutes wandering the adjoining mall. When it came time to go and get our tickets, however, we were in a Walmart and had some difficulty returning to the mall. Yes, we got lost in Walmart. We eventually found our way out, watched the movie, enjoyed ourselves and then found ourselves back at our motel for the night. Tomorrow we see the RCMP museum and perhaps a tour of the grounds before we head out of town for Winnipeg.

KM 27497: Willow Bunch, Saskatchewan

The town of Moose Jaw is somewhat bigger than I expected. I expected a small town, though not one of the tiny farming towns we've seen around. Instead it is perhaps what counts for a medium sized town out here; they even have parking meters on the streets.

We went to see the tunnels of Moose Jaw. These started out as steam service tunnels and were later used both by Chinese immigrants and by bootleggers. Both of the tours are historic in nature, but they are not the stuffy tours you would expect. Instead they are, more or less, tour plays. Of course nobody tells you this before the tour starts. The bootlegging tour is entirely a play, whereas the Chinese tour is half play and half explanation.

We did the bootlegging tour first and I ended up being chosen (you didn't expect them to let people volunteer, did you?) as Charlie, the regular there at the speakeasy. We went through the club, Al Capone's office and his bedroom. Then we met the guard who brews, and sells, Al Capone's 195 proof private reserve.

It was interesting and well done. As a note it turns out that Charlie is a long time drinking buddy of the guard. It was fun.

The Chinese tour was more formal in places, but that is likely because acting out what the Chinese immigrants did would be difficult and would involve burning our hands and days of backbreaking labour.

After doing this we left the city in search of a place to stay. We had some time so we decided to head a bit south. We ended up heading down a badly maintained highway until we were merely a hundred kilometres from the border. On the way we passed through a lot of farmland and I drove around a lot of broken pavement.

Willow Bunch is a small farming support community. It is big enough to have a motel, at least one gas station (we've passed towns without those) and a pub. It also has a museum, but I'm not sure why. As we were very near town we saw a sign leading to a historic park with petroglyphs. It was only eighteen kilometres down a gravel road and it even looked like it'd been graded in the last couple of years. On the way there we passed through an even smaller town which goes by the name St. Victor. St. Victor is a one-street town and though that street is paved in town, it is gravel on either side. I'm really not sure how the town survives, but if anybody is interested there is a house for sale.

Anyways, we went up a hill to see these carved rocks. They are horizontal sandstone rocks exposed to the elements, so the actual carvings are faint. Somebody was nice enough to place a modern replica with deep carvings for us to examine in detail with ease. We looked around and took in the view of the plains, as this was an actual hill, perhaps two hundred feet above the rest of the plains. It was nice.

So back we headed down eighteen klicks of gravel to the town we had decided to actually stay at. We went to the regional park and toured it trying to find a suitable spot. I was quite surprised at how nice the park was considering the area and what sort of money must be available. We set up and ate dinner. Just before cooking we noticed that the time on my watch and the time on my cellphone did not match up; they differed by an hour. For the time being we chose to go by my watch.

KM 27141: Swift Current, Saskatchewan

Well, the world's tallest tepee isn't quite what I expected. Firstly, I expected the world's tallest tepee in Medicine Hat to be enclosed; it wasn't. It was instead an exposed skeleton of structural steel. This was the first disappointment. Secondly, I fully expected the tallest tepee to be a gift shop, visitor centre or liquor store. All we got instead was half a dozen paintings and some plaques explaining them. All the paintings represented important events or elements of native culture in Alberta.

Other than that we didn't really do much. The drive from Rosedale to Saskatchewan is really quite relaxing. The road is good and the small hills keep things interesting enough. Unfortunately sections of the highway become rather rough in Saskatchewan. Also, the parts of Saskatchewan we have seen are mostly flat, but not nearly as flat as the jokes would have you believe.

Judging by the distance we've covered since we left and how much more we have left to cover, I am beginning to wonder how long this leg will truly take. It may still take the three months I first guessed because we have a number of stops and family to see, but it may involve significantly less driving than I first anticipated because the stops have been rather tightly grouped so far. In some ways this is good: travelling less distance costs less and we can do more. In other ways it is less good: the trip doesn't feel even, with some driving and some seeing every day, but instead lumps lots of driving together with lots of seeing. The lack of an even mix will probably be more tiring and I worry about that.

KM 26668: Rosedale, Alberta

Today we took our leave of Calgary. We woke up, showered and started to pack up while Grandma made us a nice breakfast. After eating and chatting for a short while we finished packing and loaded up the truck. We made it out at about ten thirty in the morning.

Our first stop was the Royal Tyrrell Museum. Because we got out a bit late and had to wait a short while to fill the truck we didn't arrive until nearly one o'clock. So we wandered around the museum and took a look at the dinosaurs and related things. It was nice and enjoyable. Courteney had never been and I had last been there about eight years ago.

We finished going through the museum at around four thirty. Courteney's back has been bothering her for the last couple of days and was still bothering her as we left, so we went directly to a campsite to stay for the night. We've had large meals recently, so we ate small: some soup and fruit.

KM 26486: Calgary, Alberta

Today was a lazy day. Because it was Father's Day the Barnerts put on a huge breakfast with omelets, potatoes, waffles and fruit. We gladly took part. We ate and then lazed around talking until about noon, when Courteney and I began our trip back to my grandparents'. Once there I had a shower and read while Courteney baked a couple of strawberry-rhubarb pies.

Later, family members started arriving for the dinner Grandma was putting on for us. The turkey was good. Eventually dessert came, we ate the delicious pies still warm from the oven and everybody was well fed. There was much conversation and, as it was a Sunday, they started to roll home around nine.

This was fine because both Courteney and I were quite tired. We went to bed shortly after the last group left.

KM 26442: Okotoks, Alberta

After a great breakfast of bacon and eggs we headed out to the museum we failed to find yesterday. It was actually called the Military Museums. It is a pretty nice museum with sections from most of the branches of the Canadian Military. It is too large to fully see in a single day. Instead we took