What I learned at GlueCon 2023 — Tipping Points and Generative AI

adrian cockcroft
9 min read · Jun 2, 2023
The final slide of my GlueCon keynote featuring a sunset over a pool in Maui — picture by Adrian

I’ve presented at GlueCon many times over the last decade or so. It’s an unusual event, held at the end of May at an isolated hotel between Denver and Boulder, Colorado. The agenda is curated by Eric Norlin, and the two-day event is run by his wife Kimberley. The theme is whatever Eric thinks is interesting that year, but many of the same attendees and speakers return, so there’s a tribe-of-friends feel to it. It’s big enough to be interesting but not too big (this year had fewer attendees than some in the past). It’s friendly, diverse, has good food, and is very much an in-person, make-new-friends kind of event. There are no videos of the talks, so I tend to be a bit more experimental and try out new content at GlueCon.

This year the main focus was on Generative AI, ChatGPT, and the implications of everything going on in that space. There were also side tracks on WebAssembly and Observability. Previous years have covered APIs (which is where the GlueCon name came from), robotics, and whatever seemed new and interesting that year. I usually learn a lot, and this year I arrived with some half-formed ideas about Generative AI and a bit of time spent playing with ChatGPT, and got a fairly deep dive into what’s happening in this space, both in the talks and in the chats in between.

Eric asked me to give one of the keynotes at the end of the first day, and invited local Denver people to join the conference for an AI-focused meetup that included the final three keynotes, the drinks and snacks reception, and some additional talks in the evening. My talk was on Innovation and Tipping Points. The first half was based on content I’ve presented before on how to get out of the way of innovation by speeding up time to value, or idea to implementation. The second half was an exploration of tipping points: when something that was expensive becomes cheap, or something that was cheap becomes expensive. You need to be able to innovate fast enough to pivot or reinvent your business model and leverage the change.

One example is that Netflix launched its streaming service in 2007, just at the point when the cost of streaming a movie over the network (which was dropping fast) became less than the cost of shipping a DVD (see the cost-crossover sketch below). Netflix streaming content was relatively cheap to license when Netflix was small, but when it grew to have more subscribers than the biggest cable TV operators, the licenses hit a tipping point and cost more than making the content in the first place. Netflix pivoted to making its own original content and created a major new movie studio from scratch.

Working from home is another tipping point, enabled by widespread deployment of laptops, home internet with capacity to run video (thanks in part to Netflix and other streaming services), and distributed productivity tools, then kicked over the line by the COVID-19 lockdown. My view is that execs who now want remote workers back in the office would be better off spending their time improving their remote collaboration tools and culture, scheduling occasional concentrated in-person events, and cutting back on real-estate spending as fast as they can.
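
To make the streaming-versus-DVD crossover concrete, here’s a minimal sketch of the kind of calculation involved. The starting cost, the rate of decline, and the per-DVD shipping cost are made-up illustrative numbers, not Netflix’s actual figures; the point is simply that a fast-declining cost curve eventually crosses a flat one.

```python
# Toy model of the streaming-vs-DVD tipping point described above.
# All numbers are illustrative assumptions, not Netflix's actual costs.

dvd_cost_per_movie = 1.00      # round-trip postage and handling, assumed flat
streaming_cost_2000 = 10.00    # assumed per-movie network delivery cost in 2000
annual_cost_decline = 0.30     # assume delivery cost drops ~30% per year

cost = streaming_cost_2000
for year in range(2000, 2015):
    if cost < dvd_cost_per_movie:
        print(f"Tipping point: streaming becomes cheaper than a DVD around {year}")
        break
    print(f"{year}: streaming ≈ ${cost:.2f} per movie vs ${dvd_cost_per_movie:.2f} for a DVD")
    cost *= (1 - annual_cost_decline)
```

With these assumed numbers the crossover lands around 2007, but the shape of the argument, not the specific values, is what matters: once the curves cross, the economics flip and stay flipped.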

Various AI technologies are maturing, have hit some tipping points, and are evolving extremely quickly, week by week. The purpose of my talk was to get people to understand how fast they could innovate, and to use prior examples as patterns for detecting and jumping on emerging tipping point opportunities. Here are my initial guesses at where these may be:

Most people are focusing on ChatGPT, GitHub Copilot, and similar conversational AI tools, which have become competitive with the average expert or developer, have very broad general knowledge, and are remarkably good at responding to prompts, but which don’t have any sense of purpose, agency, or their environment and context, have a high error rate, hallucinate, and are often working on old information. In my talk I brought up a different development thread of AI in the self-driving car context, in particular Tesla FSD beta, which I have been using for the last few months. If you haven’t seen it in action, you should watch the latest video by @AIDRIVR. FSD has a sense of purpose and a planning capability, has real-time agency and responds to its environment via an ego model, predicts the behavior of pedestrians and other road users, and is being tuned to drive in a very human way, so that other road users interact with it as a predictable, normal driver. FSD isn’t finished yet, but like ChatGPT’s exam results, it’s better than the average human in most situations, and it’s getting better quickly. There are other self-driving car developments that may be more advanced in some ways, but Tesla FSD beta has deployed to a few hundred thousand cars, so its progress is much more publicly visible, and the release notes give some insight into how its capabilities are structured.
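
To illustrate the structural difference being described, a stateless prompt-response model versus a goal-directed loop, here is a conceptual sketch. This is emphatically not Tesla’s implementation; every name and number is invented purely to show the observe, predict, plan, act cycle.

```python
# Conceptual sketch of a goal-directed agent loop, contrasted with a
# stateless prompt-response model. Not Tesla's code; all names are invented.

def predict_others(observation):
    """Toy 'ego model': assume every other road user keeps doing what it's doing."""
    return [{"id": o["id"], "predicted_position": o["position"] + o["velocity"]}
            for o in observation["others"]]

def plan_next_action(goal, observation, predictions):
    """Toy planner: move toward the goal unless a predicted position blocks the way."""
    blocked = any(abs(p["predicted_position"] - observation["position"]) < 1
                  for p in predictions)
    if blocked:
        return "slow_down"
    return "proceed" if observation["position"] < goal else "stop"

def drive(goal, observation):
    # The loop that gives the system purpose and real-time agency:
    # observe, predict other agents, plan, act, then observe again.
    predictions = predict_others(observation)
    return plan_next_action(goal, observation, predictions)

print(drive(goal=100, observation={"position": 10, "velocity": 1,
                                   "others": [{"id": 1, "position": 12, "velocity": -1}]}))
```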

One area where generative AI is already very competitive is the creative arts: generating images, artwork, essays, poems, advertising copy, etc., where there is no wrong answer and what is good is highly subjective. This is disrupting journalism, education, marketing, and search engine optimization, as well as causing concerns around deep-fake images. A few hundred AI-authored books have already been published. There was a good talk on How to Build Responsible Systems While Leveraging Generative AI Capabilities by Sriram Subramanian of Microsoft, and I had several interesting conversations with him.

The current sweet spot for tools like ChatGPT is where there is a lot of documentation and consensus around a subject. As Uwe Friedrichsen discusses in his blog post ChatGPT Already Knows, the value of detailed knowledge used to be high, but it is now being commoditized. This tipping point is going to impact process-oriented “complicated” jobs first, like how to use programming languages and web services, but over time it is spreading to include “complex adaptive system” operations like driving cars. We’re already seeing the beginning of this direction with conversational programming and operations tooling like DoTheThing.ai by CtrlStack, who were exhibiting at GlueCon. Problems occur when the subject moves away from the sweet spot “into the weeds” and ChatGPT starts to confidently hallucinate output. Some of the talks at GlueCon were about techniques for adding domain knowledge via re-training, prompting, or plug-ins to maintain quality output.
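
As a rough illustration of the prompting approach (one of the three techniques mentioned), here’s a minimal sketch of pulling domain documents into the prompt before asking the question. The `call_llm` function is a stand-in for whichever chat completion API you use, and the in-memory document list and naive keyword matching are placeholders for a real retrieval system.

```python
# Minimal sketch of grounding a chat model in domain knowledge via the prompt.
# call_llm() is a placeholder for a real chat completion call; the "retrieval"
# is a naive keyword match over an in-memory list of documents.

domain_docs = [
    "Our billing API rejects amounts over $10,000 without a manager approval token.",
    "The /v2/invoices endpoint is deprecated; use /v3/invoices instead.",
]

def retrieve(question, docs, k=2):
    # Naive relevance score: count words shared between question and document.
    scored = sorted(docs, key=lambda d: -len(set(question.lower().split()) &
                                             set(d.lower().split())))
    return scored[:k]

def call_llm(prompt):
    # Stand-in so the sketch runs; swap in your chat model of choice here.
    return f"[LLM would answer based on: {prompt[:60]}...]"

def answer(question):
    context = "\n".join(retrieve(question, domain_docs))
    prompt = ("Answer using only the context below. "
              "If the context doesn't cover it, say so.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)

print(answer("Which invoices endpoint should I call?"))
```

Re-training and plug-ins attack the same problem from different directions: re-training bakes the domain knowledge into the model, while plug-ins let the model fetch fresh answers at runtime instead of relying on what it memorized.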

Some of the things that were hard but will become easy are personalized art, music, and stories (generated, not just selected like Spotify or Pandora). I also think it may become easier to adopt new languages, frameworks, APIs, and other tools if we can train popular AI platforms to know when and how to use them for specific purposes, rather than having to run a developer relations and marketing team at scale. On the other hand, the sheer volume of content for popular ecosystems like Python may squeeze out alternatives. Some kind of Generative AI Optimization (GAIO) is going to become important, much as Search Engine Optimization (SEO) changed the way web pages were constructed to suit the web crawlers feeding search engines. We may even see road signs and intersection layouts optimized to work better for self-driving cars, as the proportion of self-driving cars on the roads increases.

I went to several talks and snapped pictures of a few slides that looked interesting. Here’s a list of models that helps make sense of some of the terminology that’s floating around.

CtrlStack are building AI-driven DevOps automation tools, and they showed a few examples.

Sriram Subramanian gave a talk in the evening meetup that provided a useful summary of the landscape.

Rob Hirschfeld of RackN had this perspective on the impact of AI on his domain of infrastructure automation.

Tristan Zajonc, CEO of Continual.ai, talked about using generative AI to build products.

Russell Kaplan of Scale AI talked about prompt engineering. My opinion is that prompt engineering is a short-term problem; over time, the systems we actually use will come pre-loaded with prompts and will have conversations with us to build the goals and context that we currently inject via prompts.
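
As a sketch of what “pre-loaded with prompts” might look like in practice (my own illustration, not material from the talk), a tool can carry a fixed system prompt and accumulate the user’s goals and context across the conversation, so the user never hand-crafts an engineered prompt. The `send_to_model` function and the release-notes scenario are hypothetical placeholders.

```python
# Sketch of a tool that carries its own system prompt and accumulated context,
# so users converse naturally instead of hand-crafting prompts.
# send_to_model() is a placeholder for a real chat completion call.

SYSTEM_PROMPT = ("You are a release-notes assistant. Be concise, cite ticket IDs, "
                 "and ask a clarifying question when the request is ambiguous.")

def send_to_model(messages):
    # Stand-in so the sketch runs; replace with your chat API of choice.
    return f"[model reply given {len(messages)} messages of context]"

class Assistant:
    def __init__(self):
        self.messages = [{"role": "system", "content": SYSTEM_PROMPT}]

    def chat(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = send_to_model(self.messages)   # full history = goals + context so far
        self.messages.append({"role": "assistant", "content": reply})
        return reply

bot = Assistant()
print(bot.chat("Summarize what changed in the billing service this week."))
print(bot.chat("Now make it suitable for customers, not engineers."))
```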

Joe Shockman of Grounded AI and Allen Romano of Logoi talked about how to prevent hallucinations.

They also provided a useful link to some references on the subject.

Chai Atreya of Alteryx did an impressive live demo of using ChatGPT as a guide to developing some analysis in a Python notebook. Starting with raw data in a file and some idea of the outcome he wanted, ChatGPT was able to provide guidance and code to paste into the notebook that did a good job.
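
To give a flavor of that workflow (my own illustrative example, not the code from the demo; the file name and columns are hypothetical), the snippets such a session produces tend to look like this:

```python
# Illustrative example of ChatGPT-guided notebook analysis.
# "sales.csv" and its columns are hypothetical, not the demo's dataset.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["date"])

# First look at the data, which a model typically suggests doing before anything else
print(df.describe())
print(df.isna().sum())

# Monthly revenue trend
monthly = df.set_index("date")["revenue"].resample("M").sum()
print(monthly)

# Top products by total revenue
top_products = df.groupby("product")["revenue"].sum().nlargest(10)
print(top_products)
```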

Final thoughts… this blog post took me a week to finish writing, and it’s already out of date. Developments in generative AI are moving faster than most of the people who are part of it can keep up with, so beware of any claims you see about this area, and respond by asking how many weeks old the claim is. In particular, it appears that the trend toward open source models is accelerating, and they are getting better more quickly than the well-known ChatGPT, Bing, and Bard services. Another trend is that even though the size of models is increasing, training costs are coming down by orders of magnitude, and by starting with an open source model, good results can be obtained by anyone with a small amount of hardware. If you think things are moving fast now, be ready for an acceleration. Over the last few months VC firms have invested billions of dollars in this space, and the results of that infusion haven’t really played out yet.

I think that people saying ChatGPT et al. aren’t really intelligent will have to revise what they mean by intelligent, because human intelligence is continually re-defined as whatever computers haven’t yet become good at doing. When the kind of capabilities that FSD has are combined with generative AI, the result would have a goal, a plan to get there, and an ego model for its own safety and actions; it would model the egos it’s interacting with and humanize those interactions. We are going to have to get used to the idea that the entities we are interacting with, whether via email, social media, news and entertainment, or on the roads, are increasingly AI operated, and we won’t be able to tell. Some kind of mandatory labeling seems to be needed. There’s talk of regulation, but it’s clear that politicians don’t understand this area well enough to address the issues successfully. Finally, here’s a labeling idea I came up with: maybe we should make cars flash their hazard warning lights slowly when they are being self-driven, so that other drivers can tell?

Written by adrian cockcroft

Work: Technology strategy advisor, Partner at OrionX.net (ex Amazon Sustainability, AWS, Battery Ventures, Netflix, eBay, Sun Microsystems, CCL)