Yesterday, while I was writing my post Trusted Brands, I was digging through my blog archives in order to link back to each of the articles categorized under “Trust”. In the process, I went back and re-categorized some older articles that belonged in that category but weren’t appropriately marked. While doing that, I came across a whole lot of posts from 2013 that I had imported from my old Tumblr site, but that had been saved as drafts rather than published posts.
So I did a small test with one of them: hit Publish and checked that it looked right. Then I did a bulk edit on about 15 posts, selecting them all and changing their status from “draft” to “published”.
This didn’t have the intended effect.
Rather than those articles showing up in the archives under 2013, they were published as of yesterday. So now I have 15 posts from 2013 sitting at the top of the blog as if I had just written them.
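As a hypothetical illustration of the fix: the WordPress REST API lets you set a post’s date in the same request that changes its status, so a small script could have published those drafts while keeping their 2013 timestamps instead of letting them get stamped with “now”. A rough sketch (the site URL, auth scheme, and function names here are my own assumptions, not part of my actual setup):

```python
import json
import urllib.request

API = "https://example.com/wp-json/wp/v2"  # hypothetical site URL

def keep_date_payload(post):
    """Build the update body: flip status to publish, but re-send the
    post's original date_gmt so WordPress keeps it in the old archives
    instead of stamping it with the current time."""
    return {"status": "publish", "date_gmt": post["date_gmt"]}

def publish_keeping_date(post, token):
    """POST the payload to the (hypothetical) site's REST endpoint."""
    req = urllib.request.Request(
        f"{API}/posts/{post['id']}",
        data=json.dumps(keep_date_payload(post)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

The key design point is simply that the original date travels in the same request as the status change, so there is no window in which the post exists as “published just now”.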
That wouldn’t have been a big problem on its own. The real problem stemmed from the fact that, thanks to the automated “content system” (that I built, mind you) within the USV network, those articles didn’t just appear on my website: they showed up in the USV Team Posts widget (displayed here, on Fred’s blog, and on Albert’s blog), they appeared (via the widget) in Fred’s RSS feed, which feeds his daily newsletter, and push notifications went out through the USV network Slack. Further, some elements of this system (specifically, the consolidated USV team RSS feed, which is powered by Zapier) are not easily changeable after the fact.
Due to the way this happens to be set up, all of those triggers fire automatically and in real time. As Jamie Wilkinson commented to me this morning, it’s unclear whether this is a feature or a bug.
Point is: real-time automation is really nice when it works as intended. Every day for the last couple of years, articles have been flowing through this same system, and it has been great.
However, as this (admittedly quite minor) incident shows, real-time, automated, interconnected systems carry a specific type of failure risk. In this particular instance, there are a couple of common-sense safeguards we could build in to protect against something like this (namely: a delay before the consolidated RSS feed picks up articles, and/or a simple way to edit it after the fact). Perhaps we’ll get to those.
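For what it’s worth, that first safeguard could be as simple as a grace period: the consolidated feed only picks up items that have been live for some minimum window, so an accidental publish can be caught and reverted before anything downstream fires. A minimal sketch (the function name and the 30-minute window are illustrative assumptions, not how the actual Zapier setup works):

```python
from datetime import datetime, timedelta, timezone

HOLD_WINDOW = timedelta(minutes=30)  # hypothetical grace period

def ready_to_syndicate(entries, now=None):
    """Return only the feed entries whose publish time is at least
    HOLD_WINDOW in the past, holding back anything fresher so an
    accidental publish can be reverted before it propagates."""
    now = now or datetime.now(timezone.utc)
    return [e for e in entries if now - e["published"] >= HOLD_WINDOW]
```

The trade-off is latency: everything downstream (widgets, newsletter feed, Slack notifications) runs half an hour behind, in exchange for a window in which a mistake stays local to one site.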
But I also think about this in the context of crypto/blockchain and smart contracts, where a key feature of the code is that it’s automatic and “unstoppable”. We’ve already seen some high-profile instances where unstoppable code-as-law can create some hard situations (the DAO hack, the ETH/ETC hard fork, etc.), and we will certainly see more.
There’s a whole lot of value and power in automatic, unstoppable, autonomous code.