I finally got to read the book “Getting Things Done” by David Allen. I have read quite a few books on time management and related topics, so I was somewhat surprised that it took me so long to get to this particular one, which many consider a classic. I got to this book more or less by accident. A while back I started using a command-line task management tool called TaskWarrior, another old Linux tool that has apparently been known to everyone but me for decades. This tool refers to the “Getting Things Done” methodology taught in the book, so I ended up reading the book as well, out of appreciation for some of the features it inspired.
I am not going to summarize this book now, nor the GTD methodology; there are dozens of write-ups about both by now. I am also not going to describe the TaskWarrior program, because that would not be the best use of your time or mine. Instead, I will discuss my most important findings: the relatively novel task-management ideas that I adopted from the book, from the software, or simply from my own experience with the two (even where not directly taught by either). So this is not really a book review, but a list of my own takeaways, some written in the book and some merely inspired by it, or by the software I use.
Continue reading "Getting Things Done (GTD) and what I made of it"
Blockchains, DeFi, DAOs, and Web 3.0 in general all carry the message of decentralization, and particularly of decentralizing financial systems. Decentralization means, for the most part, eliminating the trusted authorities that are involved in various types of transactions. Those transactions could be agreements (to be facilitated by decentralized smart contracts) or transfers of funds in general (to be facilitated by Bitcoin and the like). Centralization of financial services is considered evil because the middlemen often have their own incentives, and even when they don’t, they often charge a lot for their essential role in the system. As we move into the decentralized era, we have smart contracts, which enforce themselves using machine code without a trusted executor, and Bitcoin, which carries value and can be transferred between people without their having to trust any specific service provider, state-owned or otherwise. Nice.
This post is neither for nor against decentralization. As I see it, nothing can be said against what is essentially an option. A decentralized system is called decentralized because it does not require a centralized authority, not because it does not allow one to exist. If you find that you really miss a middleman, you can always appoint one. If you want the state bank to manage your Bitcoin, nothing prevents that. You get an option, and having options is always good.
This post discusses how well the current decentralized financial systems suit the world they operate in. My take, as the title may disclose, is that while decentralized systems are an excellent idea and a worthy option, their current implementations suffer from shortcomings that we will have to fix before they can become mainstream. There are many such shortcomings, of course, and I will not try to enumerate them all; I will focus on one: today’s decentralized systems assume too much perfection. It’s not that they don’t work well; it’s that they don’t fail well.
Continue reading "Decentralized Finance that is too perfect for reality"
I recently read the book Decisive by Chip Heath and Dan Heath. This is one of the better growth books I’ve read lately, because it nicely combines scientific findings with actionable guidelines. Most growth books are either purely motivational, repeating shallow inspirational mantras with small tweaks, or they present solid logic explaining how things could be better, with few hints on how to put that logic to practical use. This book, on the other hand, explains well-substantiated pitfalls in our decision-making and offers simple mental hacks to help us overcome those pitfalls. I also liked that each chapter concludes with a single-page summary, making it easy to recap its lessons and conclusions. I find this immensely useful because I’m the type of person who reads very little each day, and not every day, so reading a single chapter can sometimes take me weeks.
The rest of this post lists my key takeaways from this book.
Continue reading "Book review: "Decisive""
I recently read a good essay by Alex Gantman titled “A Corporate Anthropologist’s Guide to Product Security”. It is a year old, but I had not noticed it before, and in any event its contents are not time-sensitive at all. If you’re responsible for deploying an SDLC in any real production environment, then you are likely to find much truth in this essay.
Continue reading "Recommended: A Corporate Anthropologist’s Guide to Product Security"
Every company that has both development teams and security teams also enjoys a healthy amount of tension between them. The specifics of the emotions involved may vary, but quite often the security folks see developers as not caring enough about security, as focusing on short-term feature gains rather than long-term robustness, and, all in all, despite best intentions, as still not “seeing the light”. Developers, in turn, often see their security-practicing colleagues as people whose overly intense focus on security blinds them to all the other needs of the product. They sometimes see those security preachers as holding an overly simplistic view of the product design, and particularly of the cost and side-effects of the many changes they request for the sacred sake of security.
People in both camps are right to a certain extent, and to a certain extent exaggerating and not giving the other side enough credit. And yet it does not even matter where the truth lies, or whether there is a single truth at all. What matters is that there are two groups that are both essential to product success, and that should work towards a common goal: a product with many appealing properties, security included.
The rest of this post presents tips for proper collaboration between security and development teams, specifically when it comes to setting and implementing security requirements. Given my default affiliation with the security camp, the actions I prescribe are targeted primarily at security people, but I hope that both developers and security practitioners can benefit from the high-level perspective I try to convey in the following five tips.
Continue reading "Getting security requirements implemented"
A few months ago I read an interesting post that I felt compelled to write about. The post, titled “Australian Court determines that an Artificial Intelligence system can be an inventor for the purposes of patent law”, tells exactly what its title says. The case in question comes from the pharmaceutical industry, which has always been an avid user of the patent system, but one can easily see how the verdict could apply to many (if not all) other patent areas as well.
The article reads:
“In Australia, a first instance decision by Justice Beach of the Federal Court has provided some guidance: pursuant to Thaler v Commissioner of Patents (2021) FCA 879, an AI system can be the named inventor for an Australian patent application, with a person or corporation listed as the applicant for that patent, or a grantee of the patent.”
“Worldwide, this is the first court decision determining that an AI system can be an inventor for the purposes of patent law.”
“The UK Intellectual Property Office (UKIPO), European Patent Office (EPO), and US Patent and Trademark Office (USPTO) each determined that an inventor must be a natural person.”
An appeal process is still ongoing, but this judgment serves as an important milestone in the anticipated future of artificial intelligence, which bears enough resemblance to traditional human intelligence to demand similar treatment, first as art, and now also as the source of patents.
I must admit that when I first read this article it seemed to me like a joke, and even a funny one at that. However, the more I thought about it, the more sense it made. The purpose of this post is to take you through my thought process.
Just note that I am not a lawyer or a patent attorney; I express an opinion only as someone who is nowhere near an authority on the subject.
Continue reading "Patents invented by Machine Learning"
I was recently granted a US patent by the US Patent and Trademark Office. The patent bears the title “System, Device, and Method of Managing Trustworthiness of Electronic Devices”.
Continue reading "My new patent on device trustworthiness measurement"
On July 12th, I was interviewed about the security challenges of organizations deploying IoT. The recorded (and transcribed) video interview can be found here. For those who prefer a written abstract, here is an outline of what I said in reply to a short set of questions about the security challenges of IoT deployment, and the approach followed at Pelion to overcome them.
Continue reading "An interview on security challenges of organizations deploying IoT"
I recently participated in a discussion about the role of machine-generated text in the spread of fake news.
The context of this discussion was the work titled “How Language Models Could Change Disinformation”. The progress the industry has made in algorithmic text generation has led to concerns that such systems could be used to generate automated disinformation at scale. The report examines the capabilities of GPT-3, an AI system that writes text, to assess its potential use for promoting disinformation (i.e., fake news).
The report reads:
In light of this breakthrough, we consider a simple but important question: can automation generate content for disinformation campaigns? If GPT-3 can write seemingly credible news stories, perhaps it can write compelling fake news stories; if it can draft op-eds, perhaps it can draft misleading tweets.
What follows is my take on this.
Continue reading "Machine generated content helping spread fake news"
On May 12th, the Biden administration issued an Executive Order intended to improve the overall security posture of the software products that the government buys from the private sector. Recent events, such as the SolarWinds hack, contributed to the realization that such a move was necessary.
This Executive Order is a big deal. Of course, nothing will change overnight, but given the size and complexity of the software industry, as well as the prevailing culture behind software security (the culture of “if the customer doesn’t see it, don’t spend money on it”), an Executive Order can probably yield the closest thing to immediate improvement that we could reasonably wish for. The US Government is a very large customer, and all major vendors will elect to comply with its requirements rather than cross it off their addressable markets.
A lot has been written on how important it is for the government to use its buying power (if not its regulatory power) to drive vendors into shipping more secure products. Product security suffers from what could best be described as a market failure, precisely the kind of condition that calls for such regulatory intervention.
Rather than repeat the mainstream media coverage, I would like to focus on one unique aspect of the current Executive Order, and on how it can ignite a new trend that will change product and network security for the better. I’ll discuss truly machine-readable security documentation.
Continue reading "One blessing of the Cybersecurity Executive Order"