
Linking Entities to the Wikidata Database

Abstract:

This tutorial provides a pipeline to extract entities from any source of text and link them to Wikidata.

Type: Tutorial
Difficulty: INTERMEDIATE
Duration: 1 hour
License: Academic Free License v3.0

Learning Objectives

The tutorial at hand provides a pipeline to extract and annotate geographic locations from any source of text. By combining named entity recognition with automated queries to the Wikidata API, it enables computational social scientists to smoothly analyse geographic references.

By the end of this tutorial, you will be able to:

  1. Understand the basics of Wikidata and how to query it
  2. Link entities, in particular places, to their unique identifier (ID) in the Wikidata database
  3. Implement steps to increase the quality of the linkage process

Target audience

This tutorial is aimed at beginners with some knowledge of R and a basic understanding of API queries (no worries, we will do it together 😉).

Setting up the computational environment

The following R packages are required:

require(tidyverse)
require(WikidataQueryServiceR)
require(DT)
require(jsonlite)
require(httr)  # used for the direct Wikidata API calls later in the tutorial
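
If any of these packages are not yet installed, they can be obtained from CRAN first:

# Install any missing packages from CRAN (only needed once)
install.packages(c("tidyverse", "WikidataQueryServiceR", "DT", "jsonlite", "httr"))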

Duration

Around 20 min

Social Science Usecase(s)

This method has been used in previous studies to evaluate how the legislative system affects geographical representation (Birkenmaier et al., 2024).

Introduction

This tutorial guides computational social scientists through linking geographic entities mentioned in texts, specifically places, to their unique identifiers in Wikidata using R.

Know the Basics of Wikidata

Wikidata is a free and open knowledge base that provides structured data for entries such as places, people, and concepts. It is used by Wikimedia projects like Wikipedia, and its data are published under the CC0 public domain license. Each entry in Wikidata, referred to as an item, is assigned a unique identifier known as a QID.

For example, the city of Berlin is represented by the ID Q64, while the former German chancellor Angela Merkel is identified with the ID Q7174. These unique identifiers allow seamless open data integration into computational analyses, enriching studies with detailed and reliable metadata.

To get a better understanding of Wikidata, it helps to know its core building blocks:

  • Entities: Consist of items (Q-items), each with a unique QID.
  • Properties: Attributes of items, such as location coordinates (P625).
  • Statements: Claims about an item, expressed as a property paired with a value.

The Wikidata Query Service allows users to execute complex queries on the data stored in Wikidata using SPARQL. All of the knowledge stored in Wikidata can thus be extracted dynamically with specific searches, including geographic locations, historical data, and much more.
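
As a minimal illustration of how items, properties, and statements fit together, the following SPARQL query asks for the coordinate statement (P625) of the item Berlin (Q64); it can be pasted directly into the browser-based query service at https://query.wikidata.org (described next):

SELECT ?coord WHERE {
  wd:Q64 wdt:P625 ?coord .  # Berlin's "coordinate location" statement
}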

To perform a query, users can access the Wikidata Query Service API or write and run SPARQL queries directly in the browser-based interface provided by the query service. In R, we can use the package WikidataQueryServiceR (Popov, 2020), a convenient wrapper for the Wikidata Query Service API. For instance, if we want to retrieve the list of the most populated countries in the world, we can execute the following query:

Query Explanation
  • Specify Output Variables:
    • The SELECT statement specifies the output variables. In our case we name them ?countryLabel for the country’s name and ?population for the population value. This statement is always defined before the {} block, in which the triple patterns for the entities are defined.
  • Filter by Instance:
    • In the first part of the query we filter entities to include only those with the property wdt:P31 (“instance of”) pointing to wd:Q6256 (“country”), ensuring that only relevant entities are retrieved.
  • Retrieve Population Data:
    • The second part retrieves the value of interest, the population data for each country, by including the property wdt:P1082 (“population”).
  • Optional Clauses for Refinement:
    • Finally, we can leverage optional clauses or filters to refine the results further and might include a SERVICE wikibase:label statement to ensure labels are returned in a specified language (e.g., English).
library(WikidataQueryServiceR)

# Define the SPARQL query
query <- "
SELECT ?countryLabel ?population
WHERE {
  ?country wdt:P31 wd:Q6256. # Instance of country
  ?country wdt:P1082 ?population. # Population property
  SERVICE wikibase:label { bd:serviceParam wikibase:language 'en'. }
}
ORDER BY DESC(?population)
LIMIT 10
"

# Run the query
results <- query_wikidata(query)

# View the data
head(results)
# A tibble: 6 × 2
  countryLabel               population
  <chr>                           <dbl>
1 People's Republic of China 1442965000
2 India                      1326093247
3 United States               340110988
4 Indonesia                   275439000
5 Pakistan                    223773700
6 Brazil                      213421037

Data Linkage of Places

We start by processing the dataset (df_full) to extract and clean location names. The separate_rows() function splits multiple locations listed in one cell into separate rows, while additional transformations remove trailing dots and empty rows. The result is a clean list of location names ready for further processing. From the cleaned dataset (df_temp), we extract a unique list of location names using the unique() function. This ensures that only distinct location names are processed when creating the ID dataset. The list of search terms serves as the input for querying Wikidata.
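
The input dataset df_full is not constructed in this tutorial; it is assumed to be a data frame with (at least) a column locations that stores the place names found in each text, possibly several per document separated by commas. A small toy example with this assumed structure could look as follows:

# Toy example of the assumed input structure (replace with your own data)
df_full <- tibble(
  doc_id = 1:3,
  text = c("Speech mentioning Augsburg and Berlin.",
           "Debate about Nordrhein-Westfalen.",
           "Statement on Zappendorf and Munich."),
  locations = c("Augsburg, Berlin", "Nordrhein-Westfalen", "Zappendorf, Munich")
)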

# Extract and clean location names
df_temp <- df_full |> 
  separate_rows(locations, sep = ",\\s*") |>        # Split multiple locations into separate rows
  mutate(locations = sub("\\.+$", "", locations)) |> # Remove trailing dots
  filter(locations != "")                            # Remove empty rows

# Extract unique search terms for creating the ID dataset
search_terms <- unique(df_temp$locations)
search_terms
 [1] "Augsburg"            "Berlin"              "Nordrhein-Westfalen"
 [4] "Zappendorf"          "Munich"              "Hamburg"            
 [7] "Frankfurt"           "Dresden"             "Leipzig"            
[10] "London"              "Brussels"            "Paris"              
[13] "Cologne"             "Stuttgart"           "Düsseldorf"         
[16] "Bonn"                "Vienna"              "Zurich"             
[19] "Geneva"              "Amsterdam"          

We initialize an empty data frame (id_data) to store the mapping between search terms and their corresponding Wikidata IDs (QIDs). This dataset will be populated iteratively as we query Wikidata for each location name.

id_data <- data.frame('search_term' = character(), 
                      'QID' = character(), stringsAsFactors = FALSE)

The get_wikidata_id() function is designed to query Wikidata and fetch up to three relevant QIDs for each search term. At its core, it filters results to include only geographic entities (i.e., items with coordinates) and excludes irrelevant entities such as disambiguation pages (for a full explanation, see the callout box below). The returned QIDs are then concatenated into a comma-separated string.

Query Explanation

Entity Search Service:

  • The SERVICE wikibase:mwapi statement is used to perform a search for the search_term in the Wikidata database.
  • The mwapi:search parameter specifies the term being searched.
  • The mwapi:language parameter limits results to items with labels in the specified language (‘de’ for German).

Filter by Geographic Coordinates:

  • The query includes the condition ?item wdt:P625 ?coordinateLocation so that only entities with geographic coordinates are returned. This ensures relevance to physical places.

Exclude Disambiguation Pages:

  • The MINUS {?item wdt:P31 wd:Q4167410.} clause removes disambiguation pages from the results, as these are not specific entities. This is a more conservative approach, avoiding false positives.

Group and Rank Results:

  • The GROUP BY ?item clause groups the results by unique items.
  • The COUNT(?sitelink) is used to rank entities based on the number of sitelinks (references to the entity across different Wikimedia projects), assuming entities with more sitelinks are more prominent (e.g., larger places).
  • The ORDER BY DESC(?sites) clause ranks results in descending order of sitelinks. This way we can always extract the first element to retrieve the place we are looking for, given the assumption that politicians will talk about the place within their constituency.

Limit Results:

  • The LIMIT 3 clause restricts the results to the top 3 most relevant entities.

Result Formatting:

  • The function extracts the Wikidata QID (the last part of the URL) for each entity and combines them into a comma-separated string for easy use.
# Function to fetch IDs from Wikidata and return as a comma-separated list
get_wikidata_id <- function(search_term) {
  query <- sprintf("
SELECT ?item (COUNT(?sitelink) AS ?sites) WHERE {
  SERVICE wikibase:mwapi {
    bd:serviceParam wikibase:api 'EntitySearch' .
    bd:serviceParam wikibase:endpoint 'www.wikidata.org' .
    bd:serviceParam mwapi:search '%s' .
    bd:serviceParam mwapi:language 'de' .
    ?item wikibase:apiOutputItem mwapi:item.
  }
  ?item wdt:P625 ?coordinateLocation. # Ensure it has geographic coordinates
  ?sitelink schema:about ?item.
  MINUS {?item wdt:P31 wd:Q4167410.} # Exclude disambiguation pages
} GROUP BY ?item ORDER BY DESC(?sites) LIMIT 3", search_term)
  
  result <- query_wikidata(query)
  
  if (nrow(result) > 0) {
    # Extract the last part of each URL and concatenate into a comma-separated list
    qid_list <- paste(sub(".*/", "", result$item), collapse = ", ")
    return(qid_list)
  } else {
    return(NA)
  }
}

In the matching step, we then query each search term, extract the IDs, and update the id_data dataset with the corresponding QIDs. The tryCatch() function ensures the process continues even if an error occurs for a specific term. Additionally, a Sys.sleep() pause is included between queries to prevent overloading the Wikidata API, making the process robust for larger datasets.

# Fetch IDs for the first three search terms
for (term in search_terms[1:3]) {
  print(paste("Processing:", term))
  tryCatch({
    qid <- get_wikidata_id(term)
    id_data <- rbind(id_data, data.frame('search_term' = term, 'QID' = qid, stringsAsFactors = FALSE))
  }, error = function(e) {
    message("Error processing term: ", term)
  })
  Sys.sleep(1)  # Pause to prevent API rate limiting
}
[1] "Processing: Augsburg"
[1] "Processing: Berlin"
[1] "Processing: Nordrhein-Westfalen"
id_data
          search_term                   QID
1            Augsburg Q2749, Q10415, Q10414
2              Berlin  Q64, Q152087, Q56036
3 Nordrhein-Westfalen        Q1198, Q320642

Thus, we get a list of potential matches, ranked by the number of sitelinks. Let’s inspect the output for our first input, “Augsburg”:

Search Term   Rank   Actual Place                          QID
Augsburg      1      City of Augsburg (correct match ✅)   Q2749
Augsburg      2      Aichach-Friedberg                     Q10415
Augsburg      3      District Augsburg (Landkreis)         Q10414
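
To verify what each candidate QID actually refers to, one can retrieve its English label with a short SPARQL query. This check is not part of the main pipeline, but it is a quick way to inspect the candidates manually:

# Look up the labels of the candidate QIDs returned for "Augsburg"
label_query <- "
SELECT ?item ?itemLabel WHERE {
  VALUES ?item { wd:Q2749 wd:Q10415 wd:Q10414 }
  SERVICE wikibase:label { bd:serviceParam wikibase:language 'en'. }
}"
query_wikidata(label_query)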

Quality Controls

Filtering using Wikidata Properties

Wikidata’s P31 property (“instance of”) allows us to filter entities based on their type, such as cities, administrative regions, or cultural landmarks. Using this property ensures that only entities matching the desired classifications are retained.

Code Explanation: fetch_wikidata_properties

How It Works:

  1. Wikidata API Integration:
    • The function interacts with the Wikidata API endpoint https://www.wikidata.org/w/api.php to query entity data.
  2. Input Parameters:
    • Takes a vector of QIDs (q_ids) representing the unique Wikidata entity identifiers.
  3. Initialization:
    • Creates an empty tibble (properties_data) to store the results, with columns for QID and the extracted properties.
  4. Iterative Fetching:
    • For each QID in the input:
      • Constructs query parameters (action = 'wbgetentities', format = 'json', ids = q_id, language = 'de').
      • Sends a GET request to the API.
      • Checks if the response is successful (status_code == 200).
  5. Data Extraction:
    • Extracts property P31 values (e.g., “instance of”) from the JSON response.
    • Combines the property values into a comma-separated string.
  6. Error Handling:
    • If the API call fails, the function logs an error message with the QID and the status code. This helps to debug what went wrong.
  7. Return Value:
    • Returns a tibble with two columns:
      • q_id: The Wikidata QID.
      • properties: A comma-separated string of property values.
# Function to fetch the "instance of" (P31) values for a vector of QIDs
fetch_wikidata_properties <- function(q_ids) {
  entity_url <- 'https://www.wikidata.org/w/api.php'
  
  # Initialize a dataframe to store results
  properties_data <- tibble(q_id = character(), properties = character())
  
  for (q_id in q_ids) {
    print(paste("Processing:", q_id))
    
    params <- list(
      action = 'wbgetentities',
      format = 'json',
      ids = q_id,
      language = 'de'
    )
    
    response <- httr::GET(entity_url, query = params)
    
    if (response$status_code == 200) {
      data <- fromJSON(rawToChar(response$content))
      
      # Extract all instances (P31) as a comma-separated string
      instances <- data$entities[[q_id]]$claims$P31$mainsnak$datavalue$value$id
      instances_list <- paste(instances, collapse = ", ")
      
      # Add to the results dataframe
      properties_data <- bind_rows(
        properties_data,
        tibble(q_id = q_id, properties = instances_list)
      )
    } else {
      print(paste("Failed to fetch data for", q_id, ". Status code:", response$status_code))
    }
  }
  
  return(properties_data)
}

We can then apply the function to extract the properties for each QID.

q_ids <- id_data |> 
  separate_rows(QID, sep = ",\\s*") |>  # Separate multiple QIDs into individual rows
  pull(QID)  # Extract the QID column as a vector for processing

# Fetch properties for the QIDs
properties_data <- bind_rows(lapply(q_ids, fetch_wikidata_properties))  # Apply the function to each QID and combine results
[1] "Processing: Q2749"
[1] "Processing: Q10415"
[1] "Processing: Q10414"
[1] "Processing: Q64"
[1] "Processing: Q152087"
[1] "Processing: Q56036"
[1] "Processing: Q1198"
[1] "Processing: Q320642"
# View the resulting dataframe
print(properties_data)
# A tibble: 8 × 2
  q_id    properties                                                            
  <chr>   <chr>                                                                 
1 Q2749   Q1549591, Q1187811, Q1547289, Q253030, Q42744322, Q19953632, Q8563189…
2 Q10415  Q85482556                                                             
3 Q10414  Q85482556                                                             
4 Q64     Q1901835, Q200250, Q1307779, Q15974307, Q42744322, Q133442, Q11440198…
5 Q152087 Q1767829, Q62078547, Q45400320, Q115427560                            
6 Q56036  Q171441, Q15239622                                                    
7 Q1198   Q1221156                                                              
8 Q320642 Q414147                                                               

We can see that the item Q2749 (the city of Augsburg) has a lot of properties, such as

  • Q1549591: Indicates that it is a city or urban area.
  • Q1187811: Indicates that it is part of a specific municipality.
  • Q253030: Relates to the concept of a settlement or locality.
  • Q42744322: Suggests it is part of a specific administrative region.

These properties help us classify and refine the dataset by filtering items based on their relevance and characteristics. For example, we can exclude properties such as Q85482556 (rural district of Bavaria) to ensure that only relevant local entities for the study of places, like cities or urban areas, remain (of course, this depends on the research question at hand and can be adapted flexibly).

# Define properties to exclude
exclude_properties <- c("Q85482556")  # Example: Remove rural district of Bavaria

# Merge id_data with properties_data
merged_data <- id_data |> 
  separate_rows(QID, sep = ",\\s*") |>  # Expand QIDs
  left_join(properties_data, by = c("QID" = "q_id"))  # Merge with properties_data

# Filter out items containing excluded properties
filtered_data <- merged_data |> 
  rowwise() |> 
  mutate(
    # Check if any of the excluded properties exist in the `properties` column
    has_excluded_properties = any(strsplit(properties, ",\\s*")[[1]] %in% exclude_properties)
  ) |> 
  filter(!has_excluded_properties) |>  # Remove rows with excluded properties
  ungroup()

# Pick the first QID for each search term
final_data <- filtered_data |> 
  group_by(search_term) |> 
  slice(1) |>  # Select the first row for each search term
  ungroup()
# View the final result
print(final_data)
# A tibble: 3 × 4
  search_term         QID   properties                    has_excluded_propert…¹
  <chr>               <chr> <chr>                         <lgl>                 
1 Augsburg            Q2749 Q1549591, Q1187811, Q1547289… FALSE                 
2 Berlin              Q64   Q1901835, Q200250, Q1307779,… FALSE                 
3 Nordrhein-Westfalen Q1198 Q1221156                      FALSE                 
# ℹ abbreviated name: ¹​has_excluded_properties

Outlook

With access to the properties and items for each entry, further checks and refinements become possible. For example, we can extract geographic coordinates for each location and retain only those matches that meet specific criteria, such as a minimum size or those that can be matched to a predefined shapefile. This enables a more precise analysis tailored to the researcher’s goals. In the study by Birkenmaier et al. (2024), for instance, places were matched to the constituencies of German politicians, differentiating whether the locations fell within or outside a politician’s electoral district. This type of filtering allows for a focused analysis of geographic relevance and representation.
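
As an illustrative sketch (using the QIDs matched above; this step is not part of the pipeline shown in this tutorial), the coordinates of the final matches could be retrieved like this:

# Fetch the coordinate location (P625) for the matched QIDs
coord_query <- "
SELECT ?item ?coord WHERE {
  VALUES ?item { wd:Q2749 wd:Q64 wd:Q1198 }
  ?item wdt:P625 ?coord.
}"
query_wikidata(coord_query)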

In the next step, such enriched geographic data can be combined with textual information to evaluate how places are framed in narratives. This could involve examining how regions are described in speeches, policy documents, or media reports, providing insights into the context and discourse surrounding specific locations. By linking geographic and textual data, this approach supports a more holistic understanding of regional dynamics, opening avenues for studying patterns, trends, and biases in how places are represented.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

References

Birkenmaier, L., Wurthmann, C., & Sältzer, M. (2024). Political Geography Replication Materials. https://osf.io/f582x/?view_only=e8f630e7775447bda007ba10755c29b1
Popov, M. (2020). WikidataQueryServiceR: API client library for 'Wikidata Query Service'. https://CRAN.R-project.org/package=WikidataQueryServiceR