This project started off as a research task to answer some simple questions: how do designers use specimens to make decisions about typefaces? How do they use them to decide between one typeface and another? And to what extent do they help or hinder that process?
Specimens also deliver against other project needs for typefaces. They are used as sales collateral, as collectables, and as special, precious artefacts. They are used as catalogues, as historical reference, and as technical specifications. They are used as vehicles for educating the market about new technology such as OpenType and variable fonts. Within that mix of objects, where do digital and printed specimens fit as a primary tool of evaluation? The following outlines what I’ve learnt so far.
If you’re short on time, read the TL;DR below, where I’ve outlined the key themes, with links to the sections if you want more detail.
Specimens are used by designers to assess the quality and suitability of a typeface to their project needs. Digital specimens are used differently from printed or PDF specimens, with the latter used as reference for more detailed information.
The most important tasks a designer has when using a specimen map to their evaluation behaviour. First, there is a very fast, difficult-to-articulate visual evaluation of the typeface’s suitability to their needs. They then move quickly to more pragmatic information gathering, such as price and language support. Following this, they enter a cycle of evaluation using tools such as trial fonts, type testers, and screenshots, and by applying the typeface directly to their designs to really pressure-test it. It’s at this point that a designer starts digging into the detail by looking at specific glyphs and downloading PDF specimens.
Specimens as collectables

Designers appreciate printed type specimens. They collect them and archive them, occasionally pulling them out of storage as inspiration for a particular project. Type designers appreciate printed specimens for their archival qualities.
When discussing printed specimens with designers and foundry owners, I found their reaction was markedly different to that of discussing digital specimens. Printed specimen discussions were met with smiles, and animated, passionate points of view. When the discussion turned to digital specimens, the body language changed. There were frowns, and sighs. Pain was in the face of these foundry owners. This was a typical response and, upon probing, it was clear they felt the technology involved in a digital specimen – to try and reach the aesthetic heights of printed specimens, whilst delivering to user needs and the technical requirements of purchasing and fulfilment – was costly, constraining, and ultimately, unrewarding. For most foundry owners, digital specimens are a necessity not a pleasure to produce. Printed specimens are the opposite.
Specimens as evaluation tools

Specimens form an important part of the evaluative mix, together with marketing collateral, social media, and press releases. Type specimens can broadly be split into three types:
- Print: marketing piece or collectable.
- Digital: microsite or part of a catalogue.
- Digital/print: PDF specification.
All three have their place in evaluation. The digital specimen is often the second point of evaluation after the initial trigger or call to action – typically a social media or blog post which links people directly to a specimen. Print specimens are declining in both production and commercial impact, and they can be costly in terms of initial investment. When they are produced, it is largely as vanity pieces by foundries in collaboration with graphic designers. Printed specimens are distributed to existing customers as loyalty objects, and they work very well in this regard.
As an evaluative tool, digital specimens vary greatly in their effectiveness. Many foundries do not have the resources or capability to measure the effectiveness of their specimens and how they help with a download or purchase decision.
Specimens as specification

In the evaluation process, there is a point at which designers want detail: detail on language support, licensing, OpenType features, or implementation. Some designers want the depth of specification and examples that was regularly presented in large type collections and libraries. This is where downloadable PDF specimens come in.
Type foundries and designers are largely already providing the right information in PDF specimens, although there is a degree of duplication – and wasted effort – across the industry for these specimens. Most of them follow established design and content patterns, yet every type designer and foundry is producing their own.
For those designers and font users who use them, PDF specimens are a useful addition to the evaluation mix. Post purchase, they also form an important part of archiving and categorisation with many designers filing them along with licensing information and usage rights.
How font users use type specimens
We’ll use the results from the Top Task survey to dig into more detail on this shortly. Briefly, font users use specimens in the following way:
- Immediate design evaluation: They quickly assess the suitability of the font to their project. You can think of this as a yes/no point in their evaluation, and it happens in seconds. It’s worth noting that, throughout all of the interviews, designers had difficulty articulating this part of the process. Responses were ‘gut feel’ or ‘a feeling’.
- Immediate pragmatic evaluation: They move onto practical, but quickly parsed, information gathering about pricing and language support.
- Hands-on design evaluation: If the font continues to meet their criteria, they move to hands-on evaluation. This means using the font in a browser type tester, grabbing a screenshot of the output and putting it in the design software of their choice, or downloading a trial font. This step swaps places with the pragmatic evaluation when the typeface is part of an existing subscription service. In that case, the user story might go: ‘Will this fit my project? Yep. Ok, download it.’ And that’s the end of it. When a designer is parting with cash, or starting a procurement process, the emphasis on in-browser evaluation is higher.
- Detailed and specific evaluation: Once the font meets the design and project needs, the final phase of this process is to dip into specifics for a font: a certain glyph, or the available alternates.
A top task survey was conducted during July 2020. In the first phase, tasks were created collaboratively: 80 people submitted tasks to a shared spreadsheet, before duplicates were removed and the list was randomised. Then ~300 people completed a survey in which they ranked their top tasks. This process revealed the following top tasks (1 is most important):
- Does the font have the personality I’m after?
- Does this font fit my project?
- How much does it cost?
- What is the language support?
- Does it come in enough weights and styles?
- Can I test it out?
- What are the design highlights?
- Can I see all characters?
The bottom tasks were (1 is least important):
- Learn general knowledge about typography and type design.
- Where can I report bugs?
- Are there stylistic alternates?
- Are there smallcaps or figure alternates?
- Is bulk licensing available?
- What is the file size?
- How popular is it?
- Test potential pairing.
What does this tell us?
Reviewing many existing, published digital type specimens reveals notable departures from these lists. Many of them prioritise the following:
Information or design history of the typeface. The history, inspiration, and process undertaken by the foundry or type designer. Many type specimens prioritise this type of content for marketing purposes, but we can conclude from these results that this is not immediately useful for evaluation. This type of content could be more useful as supplementary and supportive material.
Technical information such as detailed licensing information. The nitty gritty of licensing – which is very important to foundries, designers, and lawyers – is not important to users, up to a point. Obviously, they need to understand the restrictions, but an up-front EULA laying bare all the technical information is not required.
Share with other people on my team. This is a problem many subscription services have been trying to fix. The transfer of licences and assets is challenging – especially when web fonts are technically ‘attached’ to domains and user limits. Whilst it remains a challenge, it’s not a priority for users.
Contextual usage/examples or ‘fonts in use’. Examples of intended usage are something we see on many specimens. Creative, inspirational illustrations are useful context-building for people evaluating typefaces, but they do not feature in a primary task.
Is a variable font available? The industry is moving towards variable fonts. Despite the benefits and opportunities they bring, people are still either unaware of the format or cannot connect it to the benefits for their work. The mental model shift from a list of weights to instances in a design space is a big one, and requires a lot of education.
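To make that mental model shift concrete, here’s a minimal sketch in TypeScript. The instance names and axis values are illustrative assumptions, not taken from any real font; the axis tags follow OpenType’s registered `wght` and `wdth` axes. It shows how a familiar list of static styles is really a set of points in a design space, and how a specimen page could turn one of those points into a CSS `font-variation-settings` value:

```typescript
// A point in a variable font's design space: axis tag → value.
type DesignSpacePoint = Record<string, number>;

// Hypothetical named instances – what a user would previously have
// thought of as a fixed list of weights and styles.
const namedInstances: Record<string, DesignSpacePoint> = {
  Light: { wght: 300, wdth: 100 },
  Regular: { wght: 400, wdth: 100 },
  Bold: { wght: 700, wdth: 100 },
  "Bold Condensed": { wght: 700, wdth: 75 },
};

// Build the CSS `font-variation-settings` value for a point,
// e.g. { wght: 700, wdth: 75 } → '"wght" 700, "wdth" 75'.
function toVariationSettings(point: DesignSpacePoint): string {
  return Object.entries(point)
    .map(([tag, value]) => `"${tag}" ${value}`)
    .join(", ");
}
```

The same function also covers every position *between* the named instances, which is exactly the shift in thinking the format requires.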
How do the top tasks map to behaviour?
As outlined earlier, the top tasks map to our four stages of evaluation:
Immediate design evaluation: 1. Does the font have the personality I’m after? 2. Does this font fit my project?
Practical considerations: 3. How much does it cost? 4. What is the language support? 5. Does it come in enough weights and styles?
Hands-on evaluation: 6. Can I test it out? 7. What are the design highlights?
Detailed and specific evaluation: 8. Can I see all characters?
Based on what I’ve learnt, I’m making these recommendations on how we might design more effective digital type specimens that cater to a user’s evaluation needs:
- Design the information architecture of digital specimens to map to top tasks.
- Deprioritise unimportant information shown in the bottom tasks.
- Design with interaction conventions where appropriate. For example, use conventional controls for type testers, so the emphasis stays on evaluating the typeface rather than learning an interface.
- Provide a categorised list of glyphs instead of a dump of all of the glyphs in one table. It makes browsing and navigation easier.
- Make clear if certain non-conventional glyphs or attributes are default. For example, if tabular figures are default.
- Limit the controls available for the type tester to type size, variable axes, line height, and alignment.
- Include the ability to change colour contrast from light mode to dark mode to assess the quality of the typeface when reversed.
- If trial fonts are included, be clear as to what that means. Provide information on the limitations.
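As a rough illustration of the limited control set recommended above, the whole state of a type tester can be reduced to a handful of values. This is a sketch under assumptions – the property names and colours are mine, not a prescribed implementation – showing those controls and the CSS declarations a specimen page might derive from them, including the light/dark toggle for checking the typeface when reversed:

```typescript
// The deliberately limited control set: size, variable axes,
// line height, alignment, and a light/dark toggle.
interface TesterState {
  sizePx: number;
  axes: Record<string, number>; // e.g. { wght: 600 }
  lineHeight: number;
  align: "left" | "center" | "right";
  dark: boolean;
}

// Translate the tester state into the CSS declarations to apply
// to the sample text.
function toCss(state: TesterState): Record<string, string> {
  return {
    "font-size": `${state.sizePx}px`,
    "font-variation-settings": Object.entries(state.axes)
      .map(([tag, value]) => `"${tag}" ${value}`)
      .join(", "),
    "line-height": String(state.lineHeight),
    "text-align": state.align,
    color: state.dark ? "#fff" : "#111",
    background: state.dark ? "#111" : "#fff",
  };
}
```

Keeping the state this small is the point: anything not in it (tracking sliders, decoration toggles, and so on) is a control the user has to learn instead of time spent evaluating the typeface.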
Questions, thoughts, or feedback?
I’d love to hear from you if you have any questions. Throughout the course of conducting this research and writing these insights, I’ve spoken to many designers, type designers, and foundries about what I’ve learnt and I’d love to talk to you further. Ping me on Twitter or contact me through this site.
Thanks to all who helped me out with this research: to Google Fonts for supporting it, and to the many designers, developers, type designers, and foundry owners who gave their time for me to ask them stupid questions. Thanks to everyone who submitted top tasks, and who spent the eight minutes completing the survey.
This isn’t the end of this research; it’s an ongoing project. The next stage is for me to take these insights and design two specimen templates – one for digital, one for print. These will be designed to make it easy for type designers and foundries to build specimens that are researched, tested on real users, and deliver to their needs. More news on that soon!
The methods used throughout this research were depth interviewing and a top task survey.
Depth interviewing. ~100 hours.
Participants: Broken down broadly into three groups: type designers, foundry owners, font users (not just designers, but developers and hobbyists).
Top task survey.
Participants sourced from social media, newsletter, and blog. The process was anonymous and no demographic information was gathered from participants.
This was a two-part process. 1: Task creation – participants were asked to submit their top tasks to a shared spreadsheet (80 participants). 2: Survey – participants were asked to rank the top tasks from the list created in part one (306 participants).