We are at the beginning of the beginning of software systems being written and rewritten with AI at the center, and even being written by AI itself. The effects will touch every industry. AI will be a boon to society, taking on workloads that are not suitable for humans and taking the shackles off creative endeavors. In short, everything will change.
But some of those changes will be risky, even harmful, to society.
Every technology throughout history offers a mixture of progress and peril. AI is no different.
VAIL’s mission is to embrace this technology and integrate transparency into the new architecture that has AI at the center. We should know which models we are working with, what their creators’ intentions were, and whether those intentions align with our use cases.
Because AI will touch everything, there needs to be a credibly neutral public infrastructure to prove and verify that AI models, model developers, and end users are aligned with one another’s best intentions.
We’ve attempted to explain our reasoning in full detail in four parts:
1. Our hypothesis about the growth of AI systems and some of the foreseeable challenges ahead.
2. A quick intro to the guarantees needed for a scalable assurance model for AI.
3. A breakdown of why we think traditional governance and regulatory approaches won’t work for AI.
4. A high-level overview of the technology that can unlock verifiability of AI systems.