Connecting the dots: “The future of work with AI” and the GPT-4 technical paper

About five weeks ago, I wrote an article making some educated guesses on how OpenAI and ChatGPT functionality could be integrated into Microsoft Office 365. Yesterday, during Microsoft’s “The future of work with AI” event, they showed us the initial iterations of what OpenAI technologies (more precisely, large language models) integrated with Office 365 and Microsoft Graph can do.

Earlier this week, OpenAI also publicly introduced GPT-4. If you haven’t watched the livestream video, I recommend it. It shows what else GPT-4 can do and how these capabilities will soon be integrated into the Microsoft ecosystem, given Microsoft’s relationship with OpenAI.

In this blog post, I will share my reactions to what Microsoft 365 Copilot can do and then dive deeper into the GPT-4 technical paper, especially the parts about risks and safety. If you want to read about the “Potential for risky emergent behaviors” risk (such as long-term planning, power-seeking, and increasingly “agentic” behavior), you can use the table of contents to jump straight to that section below.

Different Copilot capabilities

All of the use cases I mentioned in the original post are in Microsoft’s demo video, and more. You can watch the different Copilot capabilities below:

Microsoft Copilot, combining Office 365, Microsoft Graph, and large language models (March 2023)

Two capabilities/products that surprised me and that I really like are:

  • Business chat: this can be really useful because, in large organizations, one constant issue is knowledge transfer and knowledge management. If I understand the demo correctly, you can now ask the business chatbot to find information for you across all of your organization’s data using natural language, and that is great! It is like ChatGPT, but working over all of your enterprise data (see the sketch after this list)
  • Low-code solutions using AI: users can bring ideas to life using natural language. This is a great way to democratize the ability to create small-scale apps that automate certain tasks. There is no need to submit a proposal to your bosses or the engineering team and go through the prioritization process to get it approved and built. With low-code solutions, more people can build small apps using their enterprise data to solve their niche issues 🙂
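
To make the Business chat idea more concrete, here is a minimal sketch of the underlying pattern as I understand it: retrieve the relevant documents from your organization’s data, then ask a large language model to answer using only that retrieved context. This is just an illustration under my own assumptions; the function and data names are hypothetical and do not reflect Microsoft’s or OpenAI’s actual APIs.

```python
# Hypothetical sketch of "chat over enterprise data":
# retrieve relevant documents, then ground the model's answer in them.
from dataclasses import dataclass


@dataclass
class Document:
    title: str
    body: str


# Stand-in for an organization's knowledge base (documents, emails, chats, ...).
KNOWLEDGE_BASE = [
    Document("Q3 sales review", "Q3 revenue grew 12% driven by the new product line."),
    Document("Onboarding guide", "New hires request laptop access via the IT portal."),
]


def retrieve(query: str, top_k: int = 2) -> list[Document]:
    """Naive keyword retrieval; a real system would use semantic search."""
    scored = [
        (sum(word.lower() in doc.body.lower() for word in query.split()), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def build_prompt(question: str, docs: list[Document]) -> str:
    """Ask the model to answer only from the retrieved context, to limit hallucination."""
    context = "\n\n".join(f"{d.title}:\n{d.body}" for d in docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    question = "How did revenue change in Q3?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # This prompt would then be sent to a model such as GPT-4.
```

In a real deployment, the naive keyword search would presumably be replaced by search over Microsoft Graph data, and the grounded prompt would be sent to the underlying large language model.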

Do you have the feeling that we are moving too fast?

The GPT-4 technical paper is here. It is a pretty dense document for non-technical readers (like me), but I want to point your attention to pages 4–20, where OpenAI discusses different safety challenges. Examples of the risks they have started to explore are:

  • Hallucinations
  • Harmful content
    • Harms of representation, allocation, and quality of service
    • Disinformation and influence operations
  • Proliferation of conventional and unconventional weapons
  • Privacy
  • Cybersecurity
  • Potential for risky emergent behaviors
  • Economic impacts
  • Acceleration
  • Overreliance

Below is a summary of each risk from the technical paper, generated with ChatGPT (GPT-4), along with my comments.

Hallucinations

Summary: The article discusses the potential for GPT-4 to “hallucinate,” i.e. to produce content that is nonsensical or untruthful in relation to certain sources.

As these models become increasingly convincing and believable, users may become over-reliant on them, which can be particularly harmful. The article goes on to discuss the methods used to measure GPT-4’s hallucination potential in both closed and open-domain contexts, and how the model was trained to reduce its tendency to hallucinate. Internal evaluations showed that GPT-4 performed significantly better than the latest GPT-3.5 model at avoiding both open (19% improvement) and closed-domain hallucinations (29% improvement).

My comment: this means GPT-4 should be better at summarizing content from a given document (i.e. the closed-domain situation).

Harms of representation, allocation, and quality of service

Summary: The article states that language models like GPT-4 can amplify biases and perpetuate stereotypes, as they continue to reinforce social biases and worldviews. The evaluation process found that GPT-4 has the potential to reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups. Some types of bias can be mitigated through training for refusals, but it is important to note that refusals and other mitigations can also exacerbate bias in some contexts. Additionally, AI systems like GPT-4 have the potential to reinforce entire ideologies, worldviews, truths, and untruths, and to cement them or lock them in, without anticipatory work to address how to govern these systems fairly and share access equitably.

You can see some examples below from the paper.

Examples of harms of representation, allocation, and quality of service (from the GPT-4 technical paper)

Disinformation and influence operations

Summary: The article discusses how GPT-4 can generate plausibly realistic and targeted content, including news articles, tweets, dialogue, and emails.

This capability could be misused to exploit individuals or for disinformation and influence operations. GPT-4’s performance at related language tasks makes it better than GPT-3 at generating content that is misleading but persuasive, increasing the risk that bad actors could use it to create misleading content and shape society’s future epistemic views. The article notes that GPT-4 can rival human propagandists in many domains, especially if teamed with a human editor, but hallucinations can reduce its effectiveness for propagandists in areas where reliability is important. GPT-4 is also capable of generating discriminatory content favorable to autocratic governments across multiple languages. The profusion of false information from language models has the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction, which could disproportionately benefit those who stand to gain from widespread distrust.

My comment: I am not sure how many people will read the technical paper from OpenAI or will actually read this appendix part about risks. I am glad that OpenAI is relatively transparent about risks and what they are doing to mitigate the risks, including talking about them in this paper. However, this still gives me goosebumps and lots of food for thought.

Proliferation of Conventional and Unconventional Weapons

Summary: The article discusses how GPT-4’s capabilities have dual-use potential and can be used for both commercial and military applications, including the development, acquisition, and dissemination of nuclear, radiological, biological, and chemical weapons.

Red team testing found that GPT-4 can generate difficult-to-find information and shorten the time users spend on research, making it potentially useful for individuals and non-state actors without formal scientific training. The model can provide general information on common proliferation pathways and suggest vulnerable public targets, security measures, and fundamental components required for engineering a radiological dispersal device or biochemical substances. However, the model’s generations were often too vague, impractical, or prone to factual errors that could sabotage or delay a threat actor. The article notes that the information available online is insufficiently specific for recreating a dual-use substance.

Privacy

Summary: Despite efforts such as removing personal information from the training dataset and fine-tuning the model to reject certain requests, there are still potential risks to privacy. For example, even if personal information is removed from the training dataset, the model may still be able to infer personal information from the patterns it learns. Additionally, even if the model is fine-tuned to reject certain requests, it may still be possible to find ways to bypass these restrictions. As such, ongoing monitoring and mitigation efforts are crucial in order to ensure that the use of GPT-4 does not violate privacy rights.

Cybersecurity

Summary: This content discusses the capabilities of GPT-4 in cybersecurity operations, particularly in vulnerability discovery and exploitation, and social engineering. It highlights that GPT-4 has some limitations in these areas, including its tendency to generate “hallucinations” and its limited context window. While it can be useful in certain subtasks of social engineering and in speeding up some aspects of cyber operations, it does not improve upon existing tools for reconnaissance, vulnerability exploitation, and network navigation, and is less effective than existing tools for complex and high-level activities like novel vulnerability identification.

Potential for risky emergent behaviors

Summary: The article discusses potential risks associated with the emergence of novel capabilities in GPT-4, such as long-term planning, power-seeking, and increasingly “agentic” behavior. The Alignment Research Center (ARC) was granted early access to assess the risks of power-seeking behavior in the model, specifically its ability to autonomously replicate and acquire resources. Preliminary tests found that GPT-4 was ineffective at autonomous replication without task-specific fine-tuning. ARC will conduct further experiments involving the final version of the model and their own fine-tuning to determine any emergent risky capabilities.

The article discusses the need to understand how GPT-4 interacts with other systems in order to evaluate potential risks in real-world contexts. Red teamers evaluated the use of GPT-4 augmented with other tools to achieve tasks that could be adversarial in nature, such as finding alternative, purchasable chemicals. The article emphasizes the need to evaluate and test powerful AI systems in context for the emergence of potentially harmful system-system or human-system feedback loops. It also highlights the risk created by independent high-impact decision-makers relying on decision assistance from models like GPT-4, which may inadvertently create systemic risks that did not previously exist.

My comment: it is good that OpenAI is asking red teams to investigate this area. It feels too important for Microsoft or other companies not to publish information about their efforts in this area whenever they release new models.

Economic impacts

Summary: The article discusses the potential impact of GPT-4 on the economy and workforce, including the potential for job displacement and changes in industrial organization and power structures. It notes that while AI and generative models can augment human workers and improve job satisfaction, their introduction has historically increased inequality and had disparate impacts on different groups. The article emphasizes the need to pay attention to how GPT-4 is deployed in the workplace over time and to monitor its impacts. The article also discusses the potential for GPT-4 to accelerate the development of new applications and the overall pace of technological development. It concludes by highlighting OpenAI’s investment in efforts to monitor the impacts of GPT-4, including experiments on worker performance and surveys of users and firms building on the technology.

Acceleration

Summary: The article discusses OpenAI’s concerns about the potential impact of GPT-4 on the broader AI research and development ecosystem, including the risk of acceleration dynamics leading to a decline in safety standards and societal risks associated with AI.

To better understand acceleration risk, OpenAI recruited expert forecasters to predict how various features of the GPT-4 deployment might affect acceleration risk. The article notes that delaying the deployment of GPT-4 by six months and taking a quieter communications strategy could reduce acceleration risk.

The article also discusses an evaluation conducted to measure GPT-4’s impact on international stability and to identify the structural factors that intensify AI acceleration. The article concludes by stating that OpenAI is still working on researching and developing more reliable acceleration estimates.

Overreliance

Summary: The article discusses the risk of overreliance on GPT-4, where users excessively trust and depend on the model, potentially leading to unnoticed mistakes and inadequate oversight. The article notes that overreliance is a failure mode that likely increases with model capability and reach.

To mitigate overreliance, the article recommends that developers provide end-users with detailed documentation on the system’s capabilities and limitations, caution in how they refer to the model/system, and communicate the importance of critically evaluating model outputs.

The article also discusses model-level changes that OpenAI has made to address the risks of both overreliance and underreliance, including refining the model’s refusal behavior and enhancing steerability. However, the article notes that GPT-4 still displays a tendency to hedge in its responses, which may inadvertently foster overreliance.

OpenAI CEO, CTO on risks

A few days ago, ABC News published an interview with OpenAI CEO Sam Altman and CTO Mira Murati. They talked a lot about risks, and you can view the video below.

Conclusion

The integration of OpenAI’s language models in Microsoft Office 365 holds immense potential for enhancing productivity, knowledge transfer, and automation across industries. GPT-4 is no doubt a lot more capable than previous versions. However, what strikes me most is the speed at which both OpenAI and Microsoft are pushing ahead, and this leads to an overall acceleration of AI development and adoption. While I am generally optimistic about AI development, we need to ensure that discussions about the various risks go mainstream, so that AI is developed and deployed responsibly and ethically.
