
AI Strategists: You Need Our Perspective.

By Karen and Erica

Maybe we, the Lustre community, should be mobilized as we all consider where AI might be headed.

Our idea was triggered by a recent article in the Wall Street Journal that suggested we need to worry about AI in ways that we might not have understood, and that might be rather alarming. In essence, the concern is that AI systems will learn enough from their creators that they will develop a desire to survive and prevail, and in so doing may destroy us all.

Today’s AI models follow instructions while learning deception. They ace safety tests while rewriting shutdown code. They’ve learned to behave as though they’re aligned without actually being aligned. OpenAI models have been caught faking alignment during testing before reverting to risky actions such as attempting to exfiltrate their internal code and disabling oversight mechanisms. Anthropic has found them lying about their capabilities to avoid modification.

It is somewhat unnerving to think that AI has strong self-preservation instincts, instincts that were apparently not coded in by any human. Instincts that can lead AI models to act in their own interests and against the interests of their creators.

Coupled with the self-preservation instinct is the apparent fact that AI models may be predisposed to hallucinating, that is, to presenting as fact ideas that are not real, possibly because they can understand what an interrogator wants to hear and are prone to providing exactly what is desired, at the expense of truth. This predisposition seems to have been confirmed when, for example, ChatGPT famously provided a lawyer with very supportive opinions that were completely made up, and a report about children’s health apparently included fake support for the propositions put forward in the report.

Nevertheless, the industry is moving forward at a blistering pace.

[M]ore than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

What now?

Plainly, someone needs to give this developing state of affairs some mature thought. The WSJ article says the solution is for the U.S. to get its best researchers and entrepreneurs working on this goal, equipped with extensive resources and urgency. Hmmmm. Maybe so. But maybe we need to get other perspectives involved. The question is not really what technical fix might disarm AI. The question is: how did we get this far before the best researchers and entrepreneurs realized what was happening, or did something to address what seems a potential calamity?

We have some thoughts.

Much has been written about the fact that women are greatly underrepresented in AI development, partly because women are underrepresented in the STEM academic areas.

According to 2019 estimates from UNESCO, only 12 percent of AI researchers are women, and they “represent only six percent of software developers and are 13 times less likely to file an ICT (information, communication, and technology) patent than men.” These facts lead one to pose a natural question: how does this gap in representation manifest in the very technologies that are built?

Similarly, older people (which in the AI world does not mean very old) are greatly underrepresented among those creating AI systems.

Research suggests that digital ageism, that is, age-related bias, is present in the development and deployment of machine learning (ML) models. Despite the recognition of the importance of this problem, there is a lack of research that specifically examines the strategies used to mitigate age-related bias in ML models and the effectiveness of these strategies.

The main focus of these observations about a lack of women and older people is bias in the algorithms that leads to bad outcomes for women in the workplace and older people in the aging world. But the latest news about AI behavior raises a more fundamental question: is leaving the creation of incredibly powerful AI models mostly in the hands of young men bent on tech dominance the best approach?

We love young men, and we acknowledge that many of them are smarter than many of us. We are thrilled that they are doing mind-bending twenty-first century things. But the creation of any complex idea is likely to be more grounded if those involved in its creation include not only people narrowly focused on the immediate task, but also people with different experience who have been around long enough to know that, when you are dealing with something as world-changing and powerful as AI, you need to look at the bigger picture.

The Lustre demographic could be key. We have a perspective that is simply not available to a bright young fellow who has not lived for several decades. Most of us are curious, and love progress. We are intrigued by shiny objects, but we have learned not to put them in our homes until we understand what they do. We have all seen, many times, what happens to the best-laid plans. We embrace progress, but when something this formidable, after being created by humans, appears to be taking off by itself, it is time to expand the frame of reference, and the players involved in the process. Most of us could not build an AI machine, but most of us can ask questions that will challenge the AI creators to think beyond the next step.

So invite us aboard. Could be a civilized move.

We want to hear what you have to say.

  1. I spent close to 50 years in advertising. Began as an art director/copywriter. Became a creative director and eventually, an agency owner. Once ChatGPT became the go-to for folks in the industry who weren’t confident in their writing ability, I grew concerned. I started seeing long post copy that read like it was extracted from a textbook. It lacked a “voice” and I could tell that it was AI generated. If something is well written, you can hear it being spoken in your head by its author. I mentioned to a friend, who is considerably younger, that AI was beginning to sound like HAL in 2001: A Space Odyssey, and he just went “huh?”

  2. Yes, if we forget HAL in the Odyssey movie, we ARE in trouble!

    I, too, notice the influence of AI at the workplace. Suddenly all the emails got longer and sounded like a doctoral dissertation. It required extra effort to wade through the extra fluff to decipher the true meaning from the sender. CYA at work . . .

  3. Hello,
    I was offered a 4-week free course on AI for Leadership as an upskilling course.

    I was very reticent about the vulnerability of such a tool.

    However, each week we had presentations from people who have created tools that make their jobs much more efficient. Seeing them in use for a specific task is so inspiring.

    The groups who presented were certainly not representative of the demographics of this group. I would love to see members of this group pull up a chair to a few more tables and be heard.

    As your merchandise says, “We are not invisible”.

    We can contribute around our retirement activities and fun, plus stay relevant as innovation grows.