Annual Voltaire Foundation Lecture on Digital Enlightenment Studies: Mikko Tolonen on Books as Objects, Data and Meaning: A Computational Approach to Eighteenth-Century Book and Intellectual History

In this lecture, I will present a framework developed by the Helsinki Computational History Group, and implemented together with its partners, for investigating eighteenth-century book and intellectual history through three interconnected lenses: Books as Objects, Books as Data and Books as Meaning.
We treat bibliographical detail as a key factor in understanding the flow of ideas, also examining the physical attributes of books, including the ornaments embedded within them. Using a dedicated machine-learning pipeline, we automatically extract and categorize these ornaments at scale from the ECCO corpus, then integrate the results with bibliographical metadata. This approach enables us to trace publishing practices and to follow how books circulated and were transformed within the broader distribution of intellectual traditions, shedding new light on the activities of publishers and printers such as Jacob Tonson and John Watts.
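To gesture at the kind of processing such an ornament pipeline performs, the sketch below isolates connected dark regions on a binarized page image and sorts them into coarse size buckets. Everything here is a simplified stand-in: the actual pipeline works on ECCO page scans with a trained classifier, and the `headpiece`/`fleuron` labels are hypothetical placeholders for its real categories.

```python
def components(img):
    """4-connected components of dark pixels (1s) in a binary page image."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:  # iterative flood fill
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def categorize(comp):
    """Toy size-based category; the real pipeline uses a trained classifier."""
    return "headpiece" if len(comp) >= 6 else "fleuron"
```

In the real setting the categorized ornaments would then be joined against bibliographical metadata (printer, date, place) to follow workshops across editions.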
We also approach Books as Data by leveraging computational methods for text reuse, translation mining and cross-lingual investigation in reception studies, focusing in particular on English, Scottish and French Enlightenment corpora drawn from ECCO and Gallica. These pipelines illuminate textual overlaps and uncover patterns of influence, offering insights into large-scale cultural and historical questions (for instance, the eighteenth-century reception of David Hume’s essays) and providing the means to re-evaluate fundamental issues such as the boundaries of translation.
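A minimal stand-in for text-reuse detection is word-shingle overlap: two passages that share many word n-grams plausibly share a source. This is only a sketch of the general technique under that assumption; the pipelines described above are considerably more elaborate (OCR-tolerant matching, alignment, cross-lingual linking).

```python
def shingles(text, n=5):
    """Lower-cased word n-grams ('shingles') of a passage."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(a, b, n=5):
    """Jaccard overlap of word n-grams: a crude proxy for text reuse."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

Run over all passage pairs in a corpus, a score like this surfaces candidate reuse clusters (reprints, quotations, borrowings) for closer reading.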
Finally, we turn to Books as Meaning by applying cutting-edge large language models. Through “meaning matching”, we not only quantify textual overlaps but also track how semantic content evolves over time, capturing cultural shifts once considered out of reach for computational study. By combining the physical features of books, computational workflows and interpretive practices, this threefold perspective (object, data, meaning) expands our capacity to analyze and reconstruct the multifaceted history of books, authors and ideas in the eighteenth century.
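The intuition behind “meaning matching” can be illustrated with vector similarity: represent each passage as a vector and compare directions. The sketch below uses sparse word counts purely for illustration; in the approach described above, dense LLM embeddings would take their place, which is what allows semantic rather than merely verbatim overlap to be measured.

```python
import math
from collections import Counter

def bow(text):
    """Sparse bag-of-words vector (illustrative stand-in for an LLM embedding)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(n * v[w] for w, n in u.items())
    nu = math.sqrt(sum(n * n for n in u.values()))
    nv = math.sqrt(sum(n * n for n in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

With embeddings in place of word counts, two passages phrased entirely differently can still score as close in meaning, and tracking such scores across dated editions is one way to follow semantic change over time.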
Keith M. Baker, Professor of Early Modern European History at Stanford University, explains a Digital Humanities project mapping the debates on the constituent articles of the 1789 Declaration of the Rights of Man and of the Citizen.

What happened to rights in 1789? I plan to present in this lecture some results of a collaborative research project exploring this question. Digital Humanities has done remarkable work to reveal the diffusion of texts, the circulation of letters, and the distribution of writers across enlightened Europe. In this regard, its model has tended toward the sociological and dispersive. What might be done, though, with a more political and concentrated approach that would try to digitize decisions and visualize moments of collective choice? What, more specifically, might we learn about the writing of the Declaration of the Rights of Man and of the Citizen, that portal to the modern political world? Methods of digital humanities aside, there are also good historiographical reasons for looking again at the week of debates in which the National Assembly fixed on that document.

The project I will discuss was provoked most immediately by Jonathan Israel's claims that the principles of the French Revolution, particularly as expressed in August 1789 in the Declaration of the Rights of Man and of the Citizen, represented a victory for the group of intellectuals he gathers together under the banner of a Radical Enlightenment deriving its ideas and arguments ultimately from materialist philosophy. But it bears also on issues raised by new histories of human rights, for which the character of the Declaration of the Rights of Man and of the Citizen must be crucial for the question of continuity or rupture in the practice of rights talk.