I found myself inspired to think about where the World Wide Web should go in its use of Artificial Intelligence.
I would like to tackle this from the information security front. Like many security people as they grow up, I have also started seeing security as a problem that goes beyond technology. I’m not referring here to non-technical solutions to technical security problems (like user awareness training), but to the other way around: the non-technical security risks to people and societies that may call for technical solutions.
The first decade or two of the Web was devoted to making this information highway faster, more accessible, and capable of supporting advanced information-provisioning use cases (audio, video, interaction, etc.). The following decade or two were also devoted to increasing the amount and diversity of that information and of its sources, with Web 2.0 and user-generated content replacing the traditional waterfall model. The upcoming years should, in my opinion, be dedicated to improving the Web in the areas where challenges have emerged from our successes of the first 30 years.
Continue reading "Using AI for improving the future of the Web"
When we hear of a new way in which artificial intelligence (AI) can put humanity at risk, or even just move someone’s cheese, the debate often shifts to whether or not we should pause AI development until some impact analysis is carried out, or some regulation kicks in.
I do not think that pausing AI development is even a plausible option to consider. History has shown that technology never stops. We have tried pausing it in areas that are much riskier for humanity, without much success. We should accept that humanity is insufficiently consensus-driven, and that the stakeholders in AI technologies have too many easy ways to bypass bans, and too many good reasons to do so. Yes, there is serious lobbying promoting this pausing option, but I would argue that most of it is for pausing productization rather than development, and that it is, to a notable extent, led by players who are simply late to the market and need some time to catch up.
Blockchains, DeFi, DAOs, and Web 3.0 in general all carry the message of decentralization, and particularly of decentralizing financial systems. Decentralization means, for the most part, eliminating the trusted authorities that are involved in various types of transactions. Those transactions could be agreements (to be facilitated by decentralized smart contracts) or transfers of funds in general (to be facilitated by Bitcoin and the like). Centralization of financial services is considered evil because the middlemen often have their own incentives, and even when they don’t, they often charge a lot for their essential role in the system. As we move into the decentralized era, we have smart contracts, which enforce themselves using machine code, without a trusted executor, and cryptocurrencies such as Bitcoin, which carry value and can be transferred between people without their having to trust any specific or state-owned service provider. Nice.
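As a rough illustration of the idea that code, rather than a trusted middleman, can enforce an agreement, here is a toy Python sketch of an escrow-style deal. It is only a conceptual illustration, not an actual on-chain smart contract; the parties, the amount, and the delivery check are all invented for the example.

```python
class EscrowAgreement:
    """Toy illustration of a self-enforcing agreement.

    The release condition is checked by code, not by a trusted
    middleman; in a real smart contract this logic would run on a
    blockchain rather than on one party's machine.
    """

    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.funded = False
        self.delivered = False

    def deposit(self) -> None:
        # Buyer locks the funds; neither side can spend them yet.
        self.funded = True

    def confirm_delivery(self) -> None:
        # In a real system this would come from an oracle or a signed receipt.
        self.delivered = True

    def settle(self) -> str:
        # The code, not a middleman, decides where the funds go.
        if self.funded and self.delivered:
            return f"release {self.amount} to {self.seller}"
        if self.funded:
            return f"refund {self.amount} to {self.buyer}"
        return "nothing to settle"


# Example usage, with invented parties and amount:
deal = EscrowAgreement("Alice", "Bob", 1.5)
deal.deposit()
deal.confirm_delivery()
print(deal.settle())  # -> release 1.5 to Bob
```

The point of the sketch is only that the settlement rule is expressed as code; in a real decentralized system that code would run on a blockchain, where no single party could alter it.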
This post is neither for nor against decentralization. As I see it, nothing can be said against what is essentially an option. A decentralized system is considered as such because it does not require a centralized authority, not because it does not allow one to exist. If you find that you really miss a middleman, then you can always appoint one. If you want the state bank to manage your Bitcoins, then nothing would prevent that. You get an option, and having options is always good.
This post discusses how suitable the current decentralized financial systems are, considering the world they operate in. My take, as the title may disclose, is that while decentralized systems are an excellent idea and a worthy option, their current implementation suffers from shortcomings that we will just have to fix before they can become mainstream. There are many shortcomings, of course, and who am I to even enumerate them, so I will focus on one: the decentralized systems today assume too much perfection; it’s not that they don’t work well — it is that they don’t fail well.
Continue reading "Decentralized Finance that is too perfect for reality"
I recently read a good essay by Alex Gantman titled “A Corporate Anthropologist’s Guide to Product Security”. It’s a year old, but I did not notice it earlier, and in any event, its contents are not time-sensitive at all. If you’re responsible for deploying an SDLC in any real production environment, then you are likely to find much truth in this essay.
Continue reading "Recommended: A Corporate Anthropologist’s Guide to Product Security"
Every company that has both development teams and security teams also enjoys a healthy amount of tension between them. The specifics of the emotions involved may vary, but quite often security guys see developers as not caring enough about security, focusing on short-term gains in features rather than on long-term robustness, and, all in all, despite best intentions, still not “seeing the light”. Developers, in turn, often see their security-practicing friends as people with an overly intense focus on security, which blinds them to all the other needs of the product. They sometimes see those security preachers as people who maintain an overly simplistic view of the product design, and particularly of the cost and side-effects of the many changes they request for the sacred sake of security.
People of both camps are to a certain extent right, and to a certain extent exaggerating and not giving the other side enough credit. And yet, it doesn’t even matter where the truth lies, nor whether there is truth at all. What matters is that there are two groups that are both essential for product success, and that should work towards a common goal: a product that has many appealing properties, including security.
The rest of this post presents tips for proper collaboration between security and development teams, specifically when it comes to setting and implementing security requirements. Due to my default affiliation with the security camp, the actions I prescribe are targeted primarily at the security people, but I hope that both developers and security practitioners can benefit from the high-level perspective that I try to convey in the following five tips.
Continue reading "Getting security requirements implemented"
A few months ago I read an interesting post that I felt compelled to write about. The post, titled “Australian Court determines that an Artificial Intelligence system can be an inventor for the purposes of patent law”, tells exactly what its title denotes. The case in question comes from the drug industry, which has always been an avid user of the patent system, but one can easily see how the verdict could be applied to many (if not all) other patent areas as well.
The article reads:
“In Australia, a first instance decision by Justice Beach of the Federal Court has provided some guidance: pursuant to Thaler v Commissioner of Patents (2021) FCA 879, an AI system can be the named inventor for an Australian patent application, with a person or corporation listed as the applicant for that patent, or a grantee of the patent.”
[...]
“Worldwide, this is the first court decision determining that an AI system can be an inventor for the purposes of patent law.”
[...]
“The UK Intellectual Property Office (UKIPO), European Patent Office (EPO), and US Patent and Trademark Office (USPTO) each determined that an inventor must be a natural person.”
An appeal process is still ongoing, but this judgment already serves as an important milestone in the anticipated future of artificial intelligence, which bears enough resemblance to traditional human intelligence to demand similar treatment, first as art, and now also as the subject of patents.
I must admit that when I first read this article it struck me as a joke, and even a funny one at that. However, as I kept thinking about it, it made more and more sense. The purpose of this post is to take you through my thought process.
Just note that I am not a lawyer, nor a patent attorney, and I am only expressing an opinion as someone who is nowhere close to being authoritative on the subject.
Continue reading "Patents invented by Machine Learning"
On July 12th, I was interviewed on the security challenges of organizations deploying IoT. The recorded (and transcribed) video interview can be found here. For those who prefer a written abstract, here is an outline of what I said in reply to a short set of questions about the security challenges of IoT deployment, and the approach followed at Pelion to overcome them.
Continue reading "An interview on security challenges of organizations deploying IoT"
I recently participated in a discussion about the role of machine-generated text in the spread of fake news.
The context of this discussion was the work titled “How Language Models Could Change Disinformation”. The progress made by the industry in the area of algorithmic text generation has led to concerns that such systems could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, an AI system that writes text, in order to analyze its potential use for promoting disinformation (i.e., fake news).
The report reads:
In light of this breakthrough, we consider a simple but important question: can automation generate content for disinformation campaigns? If GPT-3 can write seemingly credible news stories, perhaps it can write compelling fake news stories; if it can draft op-eds, perhaps it can draft misleading tweets.
Following is my take on this.
Continue reading "Machine generated content helping spread fake news"
On May 12th, the Biden administration issued an Executive Order that was written to improve the overall security posture of software products that the government buys from the private sector. Recent events, such as the SolarWinds hack, contributed to the realization that such a move is necessary.
This Executive Order is a big deal. Of course, nothing will change overnight, but given the size and complexity of the software industry, as well as the overall culture behind software security (the culture of: “If the customer doesn’t see it — don’t spend money on it”), an Executive Order can probably yield the closest thing to an immediate improvement that we could reasonably wish for. The US Government is a very large customer, and all major vendors will elect to comply with its requirements rather than cross it off their addressable markets.
A lot has been written on how important it is for the government to use its buying power (if not its regulatory power) to drive vendors into shipping more secure products. Product security suffers from what could best be described as a market failure condition, which would call for such regulatory intervention.
So as not to merely repeat the mainstream media, I would like to focus on one unique aspect of the current Executive Order, and on how it can ignite a new trend that will change product and network security for the better. I’ll discuss truly machine-readable security documentation.
Continue reading "One blessing of the Cybersecurity Executive Order"
An NFT (Non-Fungible Token) is a data structure that points at a particular data object in a unique way. Think of it as a way of naming digital objects, such as photos, texts, audio, or video, so that they can be referred to without ambiguity.
The ability to refer to data objects makes it possible to “mention” them in transactions. This seemingly trivial ability, when combined with the ability to create immutable records of transactions (as provided by blockchains), allows us to create immutable records that refer to data objects.
Technically, NFTs do not require blockchains. You could take a photo of a cat, create an NFT for this photo, which is essentially a unique pointer to (or: a descriptor of) it, and then go on to write a real contract on paper that says “this photo of a cat, bearing this unique ID, is hereby assigned to John Smith”, whatever this assignment means.
Blockchain and smart contract technologies allow such digital agreements to be stored in a public immutable record that no one can change once it has been written. The combination of NFTs and blockchain-based smart contracts thus allows us to securely record agreements that declare ownership of digital goods. If you have any file (photo, text, video, etc.), you can create an attestation that tells the entire world that you assign this file to be owned by whomever you choose. What does this “ownership” mean? Good question; but whatever it means, billions of dollars have already been paid for such ownerships. Is this real? The money surely is, but is the value?
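To make the notion of such a descriptor and ownership record a little more concrete, here is a minimal Python sketch based on my own illustrative assumptions: it identifies a file by the hash of its content and wraps that identifier in a simple ownership claim. The field names, the cat.jpg file, and “John Smith” are placeholders and do not follow any particular NFT standard.

```python
import hashlib
import json
import time


def make_descriptor(path: str, title: str) -> dict:
    """Build a minimal NFT-like descriptor for a file.

    The file is identified by the SHA-256 hash of its content, so any
    copy of the same bytes maps to the same identifier.
    """
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "content_sha256": content_hash,  # the unambiguous pointer to the object
        "title": title,
        "created_at": int(time.time()),
    }


def assign_ownership(descriptor: dict, owner: str) -> dict:
    """Wrap the descriptor in a record that declares an owner.

    On its own this is just a claim; a blockchain is what would make
    such a record public and immutable.
    """
    return {"asset": descriptor, "owner": owner}


if __name__ == "__main__":
    # "cat.jpg" and "John Smith" are placeholders for the example.
    nft = make_descriptor("cat.jpg", "A photo of a cat")
    record = assign_ownership(nft, "John Smith")
    print(json.dumps(record, indent=2))
```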
Continue reading "On the value of NFT"