Can ChatGPT help us to describe and interpret images?

One possible way to connect Breughel’s painting “The blind leading the blind” to the concept of artificial intelligence is to consider the idea of blindly following or trusting in something or someone without fully understanding it. In the painting, the figures are blindly following each other and as a result, all of them end up falling into a ditch.
This can be compared to the potential dangers of blindly trusting or relying on artificial intelligence without fully understanding how it works or considering its limitations. Just as the figures in the painting blindly follow each other and ultimately suffer consequences as a result, blindly trusting artificial intelligence without proper oversight or understanding could also lead to unintended negative consequences.
Therefore, it is important to approach the use of artificial intelligence with caution and to ensure that it is being used ethically and responsibly, rather than blindly trusting or following it without fully understanding its capabilities and limitations.

What sources have you used for this interpretation?

As an AI language model, I do not use sources the way a human researcher would. I have been trained on a diverse set of texts, so the interpretation provided is based on the understanding of the painting and of the concept of artificial intelligence that my training data has given me. The painting “The blind leading the blind” is a well‐known work by Pieter Bruegel the Elder, and its interpretation as a commentary on human folly and the dangers of blindly following authority or tradition is a common one. The analogy between the painting and the concept of artificial intelligence is also a common one, as both involve the dangers of blindly trusting and following something without fully understanding it.

But “training data” can be regarded as sources. What is the difference between being trained on a set of texts and on a set of sources?

You are correct that the data used to train a language model like mine can be considered as “sources.” The main difference is that the training data used to train a language model like mine is a large, diverse set of text data, whereas a researcher would typically use a smaller, more targeted set of sources that are specifically chosen to address a particular research question or topic.

The training data used to train a language model like mine is a diverse set of texts, which includes books, articles, websites, and other written materials. This data is used to teach the model how to understand and generate text, and it allows the model to have a broad understanding of a wide range of topics and ideas.

On the other hand, a researcher would use a smaller, more targeted set of sources that are specifically chosen to address a particular research question or topic. These sources are usually carefully selected and evaluated for their relevance and reliability, and they are used to build an argument or support a specific claim.

In summary, the main difference is that the training data for a language model is a large and diverse set of texts, whereas a researcher would use a smaller and specific set of sources.

But how would a researcher select this more targeted set of sources from the large, diverse set of text data?

A researcher would typically select a targeted set of sources using a process called literature review. Literature review is a process of identifying, evaluating, and synthesizing the existing research on a specific topic.

This process typically begins by identifying key terms and concepts related to the research question or topic, and then using these terms and concepts to search for relevant sources in databases, journals, and other sources of academic literature. The researcher would then evaluate the relevance, credibility, and reliability of the sources that are found, discarding sources that are not relevant or are not of sufficient quality.

Once the researcher has identified a set of relevant and credible sources, they would then read and analyze these sources in order to gain an understanding of the existing research on the topic, identify gaps in the literature, and identify any conflicting or contradictory findings. The researcher would then use this information to develop a more focused research question and to design the research study. Throughout this process, the researcher would also cite the sources used in the research, providing proper attribution and allowing others to verify the research and build upon it.

In summary, a researcher would select a targeted set of sources by using a process called literature review, which involves identifying, evaluating, and synthesizing the existing research on a specific topic. This process allows the researcher to identify relevant and credible sources, gain an understanding of the existing research on the topic, and design a research study that addresses the research question.
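The keyword-based screening step described above can be caricatured in a few lines of code: score each candidate source by how many of the query's key terms it contains, then sort by that score. All titles, texts, and terms below are invented placeholders, and a real literature review also weighs credibility, not just term overlap.

```python
def rank_sources(query_terms: set[str], sources: dict[str, str]) -> list[str]:
    """Order candidate sources by overlap with the query's key terms.

    A crude stand-in for the relevance screening of a literature
    review; real screening also evaluates credibility and reliability.
    """
    def score(text: str) -> int:
        return len(query_terms & set(text.lower().split()))

    return sorted(sources, key=lambda title: score(sources[title]),
                  reverse=True)

# Invented example records
candidates = {
    "Bruegel and proverb imagery": "bruegel proverb blindness painting",
    "Dutch shipping logs": "shipping trade routes logs",
}
ranked = rank_sources({"bruegel", "blindness"}, candidates)
```

The source on Bruegel shares two query terms and is ranked first; the unrelated one shares none and sinks to the bottom.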

But if most relevant literature is found online, and search engines and AI systems are used for filtering and selection, should the classification system that is used to filter those results not be transparent?

It is true that a lot of academic literature is now available online, and search engines and AI systems can be used to help filter and select relevant sources. However, it is important to note that these systems are not always transparent in how they work and how they rank or filter the search results.

The use of AI systems for filtering and selecting sources raises important questions about the transparency, bias, and interpretability of these systems. The lack of transparency can make it difficult for researchers to understand how and why certain sources are being ranked or filtered, and it can also make it difficult to identify and correct any biases or errors that may be present in the system. Therefore, it is important for researchers to be aware of the limitations and potential biases of the search engines and AI systems that they use, and to use multiple sources and methods to verify and validate the results. Researchers should also be aware that the classification system used to filter the results may not always be transparent, and should be critical of the results and question their reliability.

In summary, while AI systems can be useful tools for filtering and selecting sources, it’s important to be aware of their limitations and potential biases, and to use multiple sources and methods to verify and validate the results. It’s also important to be aware that the classification system used to filter the results may not always be transparent.

Does this hold even stronger for disciplines that rely on visual sources, for example Art History?

Yes, the issues of transparency, bias, and interpretability of search engines and AI systems may be even more pronounced in disciplines that rely on visual sources, such as art history.
In art history, visual sources such as paintings, sculptures, and photographs are often the primary evidence used to understand and interpret the past. However, the process of identifying, selecting, and analyzing visual sources can be time‐consuming and labor‐intensive, especially when dealing with large collections of artworks.

AI systems have the potential to help automate some of these tasks, such as image recognition, object detection, and image search. However, the use of AI systems in art history also raises important questions about the transparency, bias, and interpretability of these systems. For example, if the AI system is trained on a biased dataset, it may not be able to recognize or identify certain types of artworks or art styles, which can lead to errors and inaccuracies in the results.

Therefore, it is important for art historians to be aware of the limitations and potential biases of the AI systems that they use, and to use multiple sources and methods to verify and validate the results. They should also be critical of the results and question their reliability.
In summary, the use of AI systems in disciplines that rely on visual sources, such as art history, can be beneficial in terms of automating some of the time‐consuming tasks of identifying, selecting, and analyzing visual sources. However, it’s important to be aware of the limitations and potential biases of these systems, to use multiple sources and methods to verify and validate the results, and to question their reliability.

Would it then not be a good idea to use a standardized classification system for the documentation of image content?

Using a standardized classification system for the documentation of image content can be a good idea, as it can help to ensure consistency and accuracy in the way that images are described and indexed. A standardized classification system can provide a common language and framework for describing and organizing images, which can make it easier for researchers to find and access the images that they need.

For example, in art history, a standardized classification system could be used to describe the subject matter, style, medium, and other important characteristics of a work of art. This would make it easier for researchers to search for and find artworks that match their research interests, and it would also make it easier to compare and analyze works of art from different periods and cultures.

Additionally, a standardized classification system can also help to reduce the potential for bias in the way that images are described and indexed, by providing a set of objective criteria for describing and organizing images.

However, it’s important to note that the choice of a standardized classification system should be based on the needs and priorities of the field and should be flexible enough to adapt to new discoveries and new ways of looking at the data. Also, it’s important to keep in mind that no classification system is completely objective, and some level of interpretation is always required when classifying images.

In summary, using a standardized classification system for the documentation of image content can be beneficial in terms of consistency, accuracy, and ease of access to image data, but the choice of system should be based on the needs and priorities of the field, remain flexible, and be treated as an ongoing process.
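As a minimal sketch of how a standardized scheme enforces the consistency described above, the snippet below accepts image records only when the medium comes from an agreed controlled list. The field names, vocabulary, and sample record are all invented for the example, not taken from any real scheme.

```python
# Invented controlled vocabulary; a real scheme would be far larger.
CONTROLLED_MEDIA = {"oil on panel", "oil on canvas", "engraving", "photograph"}

def make_record(title: str, medium: str, subjects: list[str]) -> dict:
    """Build an image record, rejecting media outside the agreed list.

    Using one controlled spelling ("oil on canvas", never "oil/canvas")
    is what makes records comparable and searchable across collections.
    """
    if medium not in CONTROLLED_MEDIA:
        raise ValueError(f"unknown medium: {medium!r}")
    return {"title": title, "medium": medium, "subjects": subjects}

record = make_record("Landscape with windmill", "oil on canvas",
                     ["landscape", "windmill"])
```

A cataloguer typing a variant spelling gets an immediate error instead of silently creating a record that no search for the standard term will ever find.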

Would Henri van de Waal’s Iconclass system be a good choice as a standardized classification system?

The Iconclass system, developed by Henri van de Waal, is a widely used standardized classification system for the documentation of image content, particularly in the field of art history and cultural heritage. Iconclass is a hierarchical and polyhierarchical system that uses a combination of alphanumeric codes and keywords to describe and organize images.

The Iconclass system has been widely adopted by museums, libraries, and other institutions, and it is considered a valuable tool for cataloging and retrieving images. The system is also flexible and adaptable, allowing for new codes and terms to be added as needed.
One of the strengths of the Iconclass system is its ability to provide detailed and precise descriptions of images, which can be very useful for art historians and other researchers who are interested in particular aspects of a work of art. Additionally, Iconclass is designed to allow for multiple interpretations of the same image, a feature that lets researchers look at an image from different perspectives.

However, one potential limitation of the Iconclass system is that it is based on a European art historical perspective, which may not be entirely appropriate for images from other cultures or periods. This can be addressed by using the system in combination with other classification systems or by supplementing it with additional terms and codes that are more appropriate for non‐European images.

In summary, the Iconclass system is widely used and considered a valuable tool for cataloging and retrieving images, particularly in the field of art history and cultural heritage. It provides detailed and precise descriptions and allows for multiple interpretations of the same image. However, it’s important to keep in mind that it is based on a European art historical perspective and may not be entirely appropriate for images from other cultures or periods; it should therefore be used in combination with other classification systems or supplemented with additional terms and codes.

But would you agree that as the Iconclass system is deposited in the public repository GitHub, it can be edited and expanded to also cover objects from a wider cultural scope?

Yes, the Iconclass system is available in a public repository on GitHub, which allows users to access and edit the system’s codes and terms. This feature makes it possible for the Iconclass system to be expanded and adapted to cover a wider range of objects and images, including those from cultures and periods that were not originally included in the system.

Users can add new codes and terms, as well as modify existing ones, based on their own needs and priorities. For example, users could add new codes and terms that are more appropriate for describing and organizing non‐European images, or they could modify existing codes and terms to better reflect the specific characteristics of different cultures and periods.
Additionally, the Iconclass system can be used in combination with other classification systems, such as the Library of Congress Classification system or the Art & Architecture Thesaurus, to provide a more comprehensive and nuanced description of images. This allows researchers to look at an image from different perspectives, and to take into account the cultural and historical context in which the image was created.

In summary, the Iconclass system being available in a public repository on GitHub allows users to edit, expand, and adapt the system to cover a wider range of objects and images, and to better reflect the specific characteristics of different cultures and periods. It can also be used in combination with other classification systems to provide a more comprehensive and nuanced description of images.

Thank you for your insightful comments …

would have been the proper way to end this conversation - had it indeed been a conversation.
In fact, although it seems a perfectly reasonable dialogue about a question that may not be very profound, but is not trivial either, all the “answers” are generated by the chatbot. They can easily be mistaken for the answers given by a thoughtful, knowledgeable human respondent, but they are not.

The quality of the large language model (LLM) at the core of the application makes it pretty hard to realize that I was not asking a human for an opinion or an insight. What I was doing instead, at the outset of our dialogue, was offering a computer program the following input: “given the statistical distribution of words in the vast public corpus of (English) text processed by the LLM, what words are most likely triggered by the keywords and phrases in the question?” (a formula borrowed from M. Shanahan, Talking about Large Language Models).
And the communication algorithm made the response quite convincingly anthropomorphic by inserting the descriptive detail “in the painting, the figures are blindly following each other”, suggesting that the bot had actually “seen” the picture, which of course it had not.
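Shanahan’s formula quoted above - “what words are most likely triggered by the keywords” - can be caricatured in a few lines: count which word follows which in a corpus, then always emit the most frequent continuation. A real LLM uses a neural network over enormous token sequences, not raw counts, and the toy corpus here is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy stand-in corpus; an LLM is trained on billions of words.
corpus = ("the blind leading the blind into the ditch "
          "the blind trusting the machine into the ditch").split()

# For every word, count which word follows it, and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "blind"
```

Asked to continue “the”, this model answers “blind” simply because that pairing occurs most often in its data - no seeing, no understanding, just a distribution.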

To generate the initial response and a conversation of this quality, OpenAI had to build a colossal dataset of texts and classify its content to make sense of the information it contains. We do not know the precise composition of the dataset. It is not publicly available - “OpenAI” is not that open - but even if it were, it would be far too large to be evaluated by human users. That is precisely why AI applications like this one are being developed in the first place. The pace at which the internet’s data pool is growing keeps accelerating, so the gap between what is aggregated and what a human brain can process is getting wider all the time.
And that development will not stop. This accelerating expansion also holds for the textual and the visual sources humanities researchers work with, so they too will have to rely on information technology to maintain a grip on their material.

The sheer size of the corpus of raw data is overwhelming and leads to a certain level of opacity. That size is a fact of life, and we simply have to deal with it. However, size is not the main concern of researchers who try to evaluate just how useful AI applications can be. In a sense we have all become used to it: in everyday life we have stopped asking how search engines select and prioritise query results. We accept those results for what they are, and we no longer wonder what remains hidden from view.

However, in the context of research the classification used to transform raw data into information should be transparent and accessible for inspection.
As researchers we must know how raw data are processed, and how these massive corpora of source material are classified and made searchable; how, for example, scholarly comments about a painting like the Breughel are scraped from the internet, digested, reorganized and then read back to us by a chatbot…

As the selection and interpretation of sources is fundamental to the work of most of us in the humanities, we need to know how the content of digitized sources is classified and made accessible. This holds all the more strongly for the visual sources we consult, since textual descriptors are still an indispensable element of the metadata we need for their retrieval.

Every classification represents a certain worldview; every vocabulary is biased in some ways. So is Iconclass. Unlike the classifications applied by “BigTech”, however, it is published on the internet in its entirety, so it is available for public inspection and evaluation. Its raw data, moreover, are deposited in a public repository (GitHub) as open data, and anyone who sees shortcomings can suggest improvements and corrections.
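Because the notation itself is open, anyone can even manipulate it programmatically. The sketch below derives the chain of broader concepts implied by an Iconclass code, treating each character as one step down the hierarchy and any parenthesised part as a single unit; this is a simplification of the full Iconclass syntax, and the sample notations are used only as illustrations.

```python
import re

def iconclass_ancestors(notation: str) -> list[str]:
    """Return the broader Iconclass codes above a given notation.

    Each added character narrows the concept, so "71H713" lies under
    "71H71", "71H7", ... down to the top division "7". Parenthesised
    qualifiers such as "(PAUL)" are kept as single indivisible units
    (a simplification of the real Iconclass syntax).
    """
    # Split into whole "(...)" groups and single characters.
    parts = re.findall(r"\([^)]*\)|.", notation)
    return ["".join(parts[:i]) for i in range(1, len(parts))]

print(iconclass_ancestors("71H713"))
# prints ['7', '71', '71H', '71H7', '71H71']
```

Walking up this chain is what lets a search for a broad concept also retrieve images indexed under its narrower subdivisions.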

Hence it seems like a good idea to start offering a course to disseminate knowledge about Iconclass and its potential for computer-assisted but transparent image analysis.
(If you agree to this, let us know … we have some ideas for a course that we would like to test)

Whatever the consequences are of AI chatbots entering the field of the Humanities - and there will be many - one thing is clear: if we want to be able to critically assess what they produce, we must make sure that our sources are processed with the help of tools that are open and transparent, and that the community of researchers using them is able to modify and improve them.

I decided to pose a related question to ChatGPT, using a slightly different metaphor. Ignoring the chatbot’s puzzling combination of the emblem and the saying by Seneca - it would be useless to ask for its “source”, as we have seen above - we can find some consolation in its “answer”.

How can we connect Alciato’s emblem “Mutuum auxilium” to the concept of artificial intelligence?

Alciato’s emblem “Mutuum auxilium” (Latin for “mutual aid”) depicts two people helping each other climb a mountain, with the motto “Non est ad astra mollis e terris via” (Latin for “The way from earth to the stars is not easy”). The image is meant to convey the idea that people can achieve more when they work together and support each other, rather than trying to go it alone.
One way to connect this emblem to the concept of artificial intelligence (AI) is to consider the ways in which AI can be used to augment and enhance human capabilities, rather than replacing or competing with them. Just as the two people in the emblem are able to overcome the challenges of climbing a mountain by working together and providing mutual aid, AI can be used to augment human abilities and help people achieve their goals more effectively.
For example, AI can be used to analyze large amounts of data and identify patterns and trends that would be difficult or impossible for humans to detect on their own. It can also be used to automate repetitive tasks, freeing up time and resources for people to focus on more complex and creative endeavors. By working together with AI and leveraging its capabilities, people can achieve more than they could on their own and overcome challenges that might otherwise seem insurmountable.

Here is an interesting tweet from a Librarian @Bekkhilde
(read the whole thread on the birdsite for more info)

" :mega:Today in “why are librarians & archivists important”: we had an enquiry about a book that I couldn’t find anywhere (not just in the Library, but even existing :face_with_raised_eyebrow:. So I asked the enquirer where their reference came from. They’d used #ChatGPT to ask about an 18thC ancestor…"