
Trove periodicals

Current version: v1.0.0

There are hundreds of digitised periodicals in Trove (not including newspapers). Information about them is spread across a number of categories, and it's not always easy to find what's available. The notebooks in this repository help you harvest metadata, text, and images from digitised periodicals in Trove. There are also a number of pre-harvested datasets.

If you just want to explore the range of digitised periodicals available, a good place to start is this database of titles and issues.

See below for information on running these notebooks in a live computing environment. Or just take them for a spin using Binder.


Notebooks

Harvesting metadata

Get details of periodicals from the /magazine/titles API endpoint

This notebook uses the /magazine/titles endpoint of the Trove API to get details of digitised periodical titles and issues. It then tries to fix some problems with the data by removing duplicates and Parliamentary Papers, and checking the lists of issues against those scraped from the Trove website.
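For example, a minimal harvesting loop might look something like the sketch below. It assumes a Trove API v3 key stored in the TROVE_API_KEY environment variable; the parameter names and the 'magazine' response key follow the v3 conventions but should be checked against the current API documentation.

    # A minimal sketch of paging through the /magazine/titles endpoint (Trove API v3).
    # Assumes your API key is in the TROVE_API_KEY environment variable.
    import os
    import requests

    API_URL = "https://api.trove.nla.gov.au/v3/magazine/titles"
    HEADERS = {"X-API-KEY": os.environ["TROVE_API_KEY"], "Accept": "application/json"}

    titles = []
    offset = 0
    while True:
        params = {"encoding": "json", "limit": 100, "offset": offset}
        response = requests.get(API_URL, params=params, headers=HEADERS)
        response.raise_for_status()
        batch = response.json().get("magazine", [])  # assumed response key
        if not batch:
            break
        titles.extend(batch)
        offset += len(batch)

    print(f"Harvested {len(titles)} periodical titles")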

Enrich the list of periodicals from the Trove API

This notebook tries to fix some problems with the periodicals data from the Trove API. It also enriches the harvested data by extracting additional information from the website. It creates two datasets – one for titles and one for issues – and loads these into an SQLite database for use with Datasette Lite.
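As an illustration of the final step, CSV files can be loaded into an SQLite database with the sqlite-utils library. The file and column names below are placeholders, not the notebook's actual outputs.

    # A rough sketch of loading two CSV outputs into SQLite for use with Datasette Lite.
    import csv
    import sqlite_utils

    db = sqlite_utils.Database("periodicals.db")

    # Hypothetical file names; substitute the CSVs produced by the harvest.
    with open("titles.csv", newline="", encoding="utf-8") as f:
        db["titles"].insert_all(csv.DictReader(f), pk="id", replace=True)

    with open("issues.csv", newline="", encoding="utf-8") as f:
        db["issues"].insert_all(csv.DictReader(f), pk="id", replace=True)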

Create a list of Trove's digital periodicals

This notebook creates a list of digitised periodicals in Trove by searching for the digital identifier string nla.obj and limiting the results to periodicals. Before the Trove API introduced the /magazine/titles endpoint, this was the only way to generate such a list. This method produces slightly different results to the new API endpoint, and it might be useful to compare the two to see what each method misses.
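A hedged sketch of this search approach, using Trove API v3 conventions (the category and facet values should be checked against the current documentation):

    import os
    import requests

    API_URL = "https://api.trove.nla.gov.au/v3/result"
    HEADERS = {"X-API-KEY": os.environ["TROVE_API_KEY"], "Accept": "application/json"}
    params = {
        "q": '"nla.obj"',          # search for the digital identifier string
        "category": "magazine",    # periodicals & newsletters category
        "l-format": "Periodical",  # assumed facet value; check current facet names
        "encoding": "json",
        "n": 100,
    }

    response = requests.get(API_URL, params=params, headers=HEADERS)
    response.raise_for_status()
    data = response.json()
    # The response path below follows the v3 JSON layout; adjust if it differs.
    print(data["category"][0]["records"]["total"])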

Harvesting text

Get OCRd text from a digitised journal in Trove

Many of the digitised periodicals available in Trove make OCRd text available for download. This notebook helps you download all the OCRd text from a single periodical – one text file for each issue.
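A simplified sketch of saving the text for a single issue is below. The download URL pattern reflects how Trove's web interface appears to serve OCR downloads at the time of writing and may change; the notebook itself handles issue discovery, retries, and rate limiting.

    from pathlib import Path
    import requests

    def save_issue_text(issue_id, last_page, output_dir="text"):
        """Save the OCRd text of one issue, e.g. issue_id='nla.obj-123456789'."""
        url = f"https://trove.nla.gov.au/{issue_id}/download"
        params = {"downloadOption": "ocr", "firstPage": 0, "lastPage": last_page}
        response = requests.get(url, params=params)
        response.raise_for_status()
        Path(output_dir).mkdir(exist_ok=True)
        Path(output_dir, f"{issue_id}.txt").write_text(response.text, encoding="utf-8")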

Download the OCRd text for ALL the digitised periodicals in Trove!

This notebook helps you download the OCRd text from all (or at least most) of Trove's digitised periodicals, creating one text file for each issue. It also saves a CSV-formatted list of the issues in each periodical.

Create a database to search across each line of text in a series of volumes

The code here was used to create the NSW Post Office Directories search interface which helps you search across 54 volumes from 1886 to 1950. The same code, with minor modifications, could be used to index any publication where it would be useful to search by line (rather than Trove's default 'article') – for example, lists, directories and gazetteers – turning them into searchable databases.
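The core idea can be sketched with sqlite-utils: store one database row per line of OCRd text, then add a full-text index. This is a simplified illustration, not the notebook's exact code; the table and column names are placeholders.

    import sqlite_utils

    db = sqlite_utils.Database("directories.db")

    def index_page(volume, page, ocr_text):
        """Store one row per non-empty line of a page's OCRd text."""
        rows = [
            {"volume": volume, "page": page, "line": i, "text": line}
            for i, line in enumerate(ocr_text.splitlines())
            if line.strip()
        ]
        db["lines"].insert_all(rows)

    # After loading every volume, add a full-text index on the text column:
    # db["lines"].enable_fts(["text"])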

Harvesting images

Get covers (or any other pages) from a digitised journal in Trove

This notebook shows how to download all the cover images from a specified periodical. With some minor modifications you could download any page, or range of pages.
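As an illustration, a single page image can be saved with a few lines of Python. The URL pattern below is how digitised page images appear to be served (nla.gov.au/{page-id}/image, with an optional width parameter); see the notebook for how page identifiers are obtained for each issue.

    from pathlib import Path
    import requests

    def save_page_image(page_id, output_dir="images", width=800):
        """Save one page image, e.g. the first page (cover) of an issue."""
        url = f"https://nla.gov.au/{page_id}/image"
        response = requests.get(url, params={"wid": width})
        response.raise_for_status()
        Path(output_dir).mkdir(exist_ok=True)
        Path(output_dir, f"{page_id}.jpg").write_bytes(response.content)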

Finding editorial cartoons in the Bulletin

This notebook describes a method for finding and saving full-page editorial cartoons from The Bulletin.

Datasets

OCRd text from Trove digitised journals

This dataset contains OCRd text and metadata harvested from digitised periodicals in Trove.

Editorial cartoons from The Bulletin, 1886 to 1952

This dataset includes a collection of 3,471 full-page editorial cartoons downloaded from issues of The Bulletin published between 1886 and 1952. In most cases there is one cartoon per issue. Metadata describing each image is available in a CSV-formatted file, and in an SQLite database that can be explored using Datasette-Lite. The full collection of high-resolution images can be downloaded as a single 62GB zip file.

CSV formatted list of journals available from Trove in digital form

This dataset contains version records describing digitised periodicals found by searching for the digital identifier string nla.obj and limiting the results to periodicals. Duplicate records were merged.

Details of digitised periodicals from the /magazine/titles API endpoint

This dataset was created by checking, correcting, and enriching data about digitised periodicals obtained from the Trove API. Additional metadata describing periodical titles and issues was extracted from the Trove website and used to check the API results. Where titles were wrongly described as issues, and vice versa, the records were corrected. Additional descriptive metadata was also added into the records. Separate CSV formatted data files were created for titles and issues. Finally, the titles and issues data was loaded into an SQLite database for use with Datasette.

Run these notebooks

There are a number of different ways to use these notebooks. Binder is quickest and easiest, but it doesn't save your data. I've listed the options below from easiest to most complicated (requiring more technical knowledge).

Using ARDC Binder

Launch on ARDC Binder

Click on the button above to launch the notebooks in this repository using the ARDC Binder service. This is a free service available to researchers in Australian universities. You'll be asked to log in with your university credentials. Note that sessions will close if you stop using the notebooks, and no data will be preserved. Make sure you download any changed notebooks or harvested data that you want to save.

See Using ARDC Binder for more details.

Using Binder

Launch on Binder

Click on the button above to launch the notebooks in this repository using the Binder service (it might take a little while to load). This is a free service, but note that sessions will close if you stop using the notebooks, and no data will be saved. Make sure you download any changed notebooks or harvested data that you want to save.

See Using Binder for more details.

Using Reclaim Cloud

Launch on Reclaim Cloud

Reclaim Cloud is a paid hosting service, aimed particularly at supporting digital scholarship in the humanities. Unlike Binder, the environments you create on Reclaim Cloud will save your data – even if you switch them off! To run this repository on Reclaim Cloud for the first time:

  • Create a Reclaim Cloud account and log in.
  • Click on the button above to start the installation process.
  • A dialogue box will ask you to set a password; this is used to limit access to your Jupyter installation.
  • Sit back and wait for the installation to complete!
  • Once the installation is finished, click on the 'Open in Browser' button of your newly created environment (note that you might need to wait a few minutes before everything is ready).

See Using Reclaim Cloud for more details.

Using the Nectar Cloud


The Nectar Research Cloud (part of the Australian Research Data Commons) provides cloud computing services to researchers in Australian and New Zealand universities. Any university-affiliated researcher can log on to Nectar and receive up to 6 months of free cloud computing time. And if you need more, you can apply for a specific project allocation.

The GLAM Workbench is available in the Nectar Cloud as a pre-configured application. This means you can get it up and going without worrying about the technical infrastructure – just fill in a few details and you're away! To create an instance of this repository in the Nectar Cloud:

  • Log in to the Nectar Dashboard using your university credentials.
  • From the Dashboard choose Applications -> Browse Local.
  • Enter 'GLAM' in the filter box and hit Enter; you should see the GLAM Workbench application.
  • Click on the GLAM Workbench application's Quick Deploy button.
  • Step through the various configuration options. Some options are only available if you have a dedicated project allocation.
  • When asked to select a GLAM Workbench repository, choose this repository from the dropdown list.
  • Complete the configuration and deploy your GLAM Workbench instance.
  • The URL to access your instance will be displayed once it's ready. Click on the URL!

See Using Nectar for more information.

Using Docker

You can use Docker to run a pre-built computing environment on your own computer. It will set up everything you need to run the notebooks in this repository. This is free, but requires more technical knowledge – you'll have to install Docker on your computer, and be able to use the command line.

  • Install Docker Desktop.
  • Create a new directory for this repository and open it from the command line.
  • From the command line, run the following command:
    docker run -p 8888:8888 --name trove-journals -v "$PWD":/home/jovyan/work quay.io/glamworkbench/trove-journals repo2docker-entrypoint jupyter lab --ip 0.0.0.0 --NotebookApp.token='' --LabApp.default_url='/lab/tree/index.ipynb'
    
  • It will take a while to download and configure the Docker image. Once it's ready you'll see a message saying that Jupyter Notebook is running.
  • Point your web browser to http://127.0.0.1:8888

See Using Docker for more details.

Setting up on your own computer

If you know your way around the command line and are comfortable installing software, you might want to set up your own computer to run these notebooks.

Assuming you have recent versions of Python and Git installed, the steps might be something like:

  • Create a virtual environment, eg: python -m venv trove-journals
  • Open the new directory: cd trove-journals
  • Activate the environment: source bin/activate
  • Clone the repository: git clone https://github.com/GLAM-Workbench/trove-journals.git notebooks
  • Open the new notebooks directory: cd notebooks
  • Install the necessary Python packages: pip install -r requirements.txt
  • Run Jupyter: jupyter lab

See Getting started for more details.

Contributors

Cite as

Sherratt, Tim. (2022). GLAM-Workbench/trove-journals (version v1.0.0). Zenodo. https://doi.org/10.5281/zenodo.7039919