Tag Archives: content

Content strategy, service design and physical objects

I had the opportunity to begin learning about content strategy in the last two months or so.

I’ll probably have more to say in another post on how this changed my perspective on a lot of things. Sarah Wachter-Boettcher’s book Content everywhere: strategy and structure for future-ready content got me started, and Jonathon Colman’s Epic List of Content Strategy Resources pointed me to more great resources on the topic.

I might have stumbled across the term content strategy in Milan Guenther’s book Intersection: how enterprise design bridges the gap between business, technology and people. The enterprise design framework uses 20 aspects (organised in 5 layers) of design work in an enterprise context. The second layer, anatomy, includes these aspects: actors, touchpoints, services, content.

I was happy to find a strong element of service design in the framework, but thought emphasising the content aspect odd. Well, maybe for mostly digital services that made sense… But what about (physical) evidence? But then I’ve had issues with overemphasising service evidence, too.

By now I see the point in discussing content (strategy) at this layer in the framework, but I still feel uneasy about the (state of the discussion of) physical objects in service design. (Or am I just not reading the right stuff or talking to the right people?) Thinking of physical objects (including goods, physical products) as vehicles for provisioning services (as discussed by Dave Gray, among others) seems promising. Physical objects can certainly also be vehicles for delivering content. (We could also view content delivery as a type of service.) And then there’s a role for physical objects in a service evidence context (in a narrow sense, please).

Is it time to bring these thoughts together and elevate the discussion of physical objects in service design?

Updates
2014-09-21: Tom Graves has written a brilliant post titled From Product To Service.

Digital experiences: journey first or content first?

Both. Sort of.

Recently the topic of whether to take a journey-first or a content-first approach to delivering digital user experiences came up. The former seems to be favoured by more “traditional” user experience designers while the latter is favoured by many content strategists. No surprise here.

For now, my take is this:

I think I want to start with the customer experience in a conceptual, coarse-grained and probably channel-independent manner. A concept map (or a service ecology map, despite its grandstanding name) is a good basis from which to start mapping a customer journey.

Digging into more detail, increasing the focus on content seems useful, both in terms of detailing the content model and the actual content.

In turn, the content model as well as representative content elements can be an effective basis for designing the actual user journeys for a digital service.

Thoughts, please? Thanks.

Content strategy: a new chance for benefitting from domain modelling?

I have been interested in domain modelling for a long time. Analysis Patterns by Martin Fowler, Domain-Driven Design by Eric Evans and Streamlined Object Modeling by Jill Nicola, Mark Mayfield & Mike Abney greatly influenced my thinking and (some of) my work.

(The fundamental concepts described in Streamlined Object Modeling might actually be some of the most under-appreciated ideas in software development and information modelling.)

While I understood the benefits of good domain models early on, I can only recall one project incorporating a domain model into its software. Even when technology was able to effectively support implementing domain models, many developers seemed happy to read and write data structures to persistent storage manually and to manipulate these data structures with imperative code. To project managers, these things were probably too abstract, too invisible and too far removed from the myriad of immediate concerns they had to deal with in parallel.

And, of course, I probably didn’t make my point as well as I could have.

I was intrigued when I learned that content strategists had discovered domain modelling (and in particular Eric Evans’ work) for their purposes. Content strategists, if you read this, go and read Analysis Patterns and Streamlined Object Modeling, too — I’ll still be here when you’re done.

As I’m learning about content strategy and content management systems, I get a hunch (hope?) that this might actually be another chance to bring the benefits of domain modelling to the enterprise. This might be another chance to benefit from structured, connected and annotated information, and achieving objectives by interpreting these connections and annotations rather than writing lots and lots of imperative statements in code, process charts, rule lists or, for some of us, PowerPoint and Excel.

Content is likely to be much more tangible and immediately accessible to stakeholders than domain models ever were — as almost everyone has an opinion as to what needs to happen on the corporate website, I’m confident many stakeholders can be nudged into having an interest in content.

Let’s see how far I get this time…

Competing concerns in information modelling

This has been a recurring theme in my work so I figured I’d write about it:

Information models (domain models, object models, data models, content models) are typically subject to many different forces influencing their designs, and some of these forces can act in opposing directions. Some of these forces are specific to the problem at hand and its context while others are more generic and keep showing up in my work.

This post is about some of these more generic forces. (Or maybe it’s only about two potentially useful approaches.)

Avoiding redundancy vs. ease-of-use: Avoiding redundancy pushes towards fine-grained models in which classes and instances can be (re-) used in different contexts. A fine-grained model can be difficult to understand and may be difficult to use for developers, application/information managers and end-users. Ease-of-use pushes towards coarse-grained models which may be easier to understand but have a higher risk of inconsistencies if data is kept redundantly.
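To make this tension concrete, here is a minimal sketch (with hypothetical class names) contrasting a fine-grained model that shares a single address instance with a coarse-grained model that copies the address fields onto the invoice:

```python
from dataclasses import dataclass

# Fine-grained: one Address instance is shared, so a change propagates
# everywhere -- no redundancy, but users must follow the indirection.
@dataclass
class Address:
    street: str
    city: str

@dataclass
class Customer:
    name: str
    address: Address

@dataclass
class Invoice:
    customer: Customer  # billing address reached via customer.address

# Coarse-grained: the invoice carries its own copy of the address
# fields -- easy to read and query, but the copy can drift out of
# sync with the customer record.
@dataclass
class FlatInvoice:
    customer_name: str
    billing_street: str
    billing_city: str

home = Address("1 Main St", "Springfield")
alice = Customer("Alice", home)
inv = Invoice(alice)
flat = FlatInvoice(alice.name, home.street, home.city)

home.city = "Shelbyville"          # the customer moves house
print(inv.customer.address.city)   # fine-grained model reflects the change
print(flat.billing_city)           # coarse-grained copy is now stale
```

The fine-grained version avoids the inconsistency at the price of an extra hop through the customer object; the flat version is friendlier to read but needs explicit synchronisation logic.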

Instance-based vs. class-based differentiation: Class-based differentiation introduces different classes (and often inheritance hierarchies) to models in order to represent specific concepts. A high number of different classes can make a model unwieldy, difficult to understand and difficult to use, especially when the intent of and differences between classes are not described well. In contrast, instance-based differentiation represents specific concepts through instances of generic classes. In order to do so, the model often has to introduce additional classes, e.g. for type, state or group objects. The resulting model typically has a simpler fundamental structure (fewer classes for core elements), but a necessarily higher level of abstraction can also make the model difficult to understand.
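The same contrast in code, again with hypothetical names: class-based differentiation adds a subclass per concept, while instance-based differentiation keeps one generic class and moves the distinctions into type objects:

```python
from dataclasses import dataclass

# Class-based differentiation: one subclass per concept; adding a new
# kind of vehicle means adding a new class.
class Vehicle: ...
class Car(Vehicle): ...
class Truck(Vehicle): ...

# Instance-based differentiation: one generic class plus a type
# object; new kinds of vehicle are data, not new code.
@dataclass(frozen=True)
class VehicleType:
    name: str
    wheels: int

@dataclass
class GenericVehicle:
    vin: str
    type: VehicleType

car_type = VehicleType("car", wheels=4)
truck_type = VehicleType("truck", wheels=6)

fleet = [
    GenericVehicle("VIN-1", car_type),
    GenericVehicle("VIN-2", truck_type),
]
print([v.type.name for v in fleet])
```

Note the extra abstraction: anyone reading the second model has to understand that a “truck” is not a class but an instance of VehicleType, which is exactly the comprehension cost described above.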

A few thoughts occurred to me in this context:

Information access and modification are different concerns and might warrant different approaches: Command-Query Responsibility Segregation is one approach that might help here.
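A minimal CQRS sketch (all names hypothetical): commands append events to a write model, and a separate read model is projected from those events to serve queries in whatever shape readers need:

```python
# Write side: handles commands and records what happened.
class WriteModel:
    def __init__(self):
        self.events = []

    def handle_deposit(self, account, amount):
        # In a real system this would validate the command first.
        self.events.append(("deposited", account, amount))

# Read side: a denormalised view optimised for queries.
class ReadModel:
    def __init__(self):
        self.balances = {}

    def apply(self, event):
        kind, account, amount = event
        if kind == "deposited":
            self.balances[account] = self.balances.get(account, 0) + amount

    def balance(self, account):
        return self.balances.get(account, 0)

write = WriteModel()
read = ReadModel()

write.handle_deposit("acc-1", 100)
write.handle_deposit("acc-1", 50)

# Projection step: push recorded events into the read model.
for event in write.events:
    read.apply(event)

print(read.balance("acc-1"))  # 150
```

The point for information modelling is that the write model and the read model need not share a structure at all — each can be shaped for its own audience.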

Information access & modification by administrators, developers and end-users are different concerns and might warrant different approaches:

In my experience (yours will vary), software systems tend to structure data according to development/runtime concerns, with some allowance being made for end-user concerns. Administration concerns tend to get little attention. Interestingly, this seems to be somewhat different for content management systems, likely because content managers are an essential end-user group in this context. CMSs are built to administer information in one structure and make it available in many different structures.

Could content management systems help address the different needs of administrators, developers and end-users? Even of administrators, developers and end-users of other integrated software systems?

And could this also have beneficial side effects with respect to automated testing, continuous integration & delivery, etc?

Function & information: a useful duality?

I’m much indebted to Jesse James Garrett and his book The Elements of User Experience. Much of my work had been in bespoke, large-scale enterprise software systems, and this book provided an invaluable introduction to the field of user experience design.

Most importantly, Jesse’s book presented the field in a highly accessible and approachable way for software developers like me. (Alright, back then I was one…guess I don’t qualify anymore…)

Jesse achieved this by acknowledging what he calls “[the] basic duality in the nature of the Web”, i.e. acknowledging the web both “as a platform for functionality” and “as an information medium” (Garrett, 2011, p. 27). (In the first edition, Jesse called this the web as software interface and the web as hypertext information space / hypertext system.) This duality is one of the key organising ideas in Jesse’s elements diagram. (This is the diagram as of the first edition. In the second edition, “visual design” is called “sensory design”, “site objectives” are now “product objectives”, and the two sides of the duality are called “product as functionality” and “product as information”.) This diagram may well be the web’s Rosetta Stone: at least software developers and web designers now had the ability to clearly express what they fundamentally disagreed about.

The diagram (especially the one in the second edition) is great as it is. Perhaps I’d rename “functional specifications” to “functional requirements” to avoid the (in my perspective) arbitrary difference from “content requirements”, but that’s not a major problem.

But — you knew there was a “but” coming, right? — this diagram also made me realise how much this basic duality actually irks me. Thinking about it, this duality rears its ugly head everywhere (more or less openly): object state & behaviour in object-oriented analysis & design, business information & process models everywhere (e.g. TMForum’s SID and eTOM), data & application architectures in TOGAF…I’m sure you can easily think of more examples.

Staying with websites as an example, we realise that even content-centric (or information-centric) websites usually require sophisticated functionality, and functionality-centric websites (or web applications) can achieve little without information or content to work on. The diagram is easy to “fix”: remove the divider in the middle and stretch the different disciplines or methods across the entire width of the different panes. Done. (Note that Jesse’s diagram is explanatory rather than prescriptive: he didn’t propose thinking about websites from two unrelated perspectives.)

In the rest of this post, I really leave the web example and the points Jesse made in his book behind — so this is not a criticism of the book.

To me, there’s more to it: First, it increasingly seems ineffective to me to think of function and information as two (fairly) unrelated concerns — or even to describe them only in two (fairly) unrelated architectural views. This connects to Tom Graves’ discussion of how services act on and exchange assets. (Summary of and pointers to sources here.) Second, I’m somewhat uncomfortable with the notion of information architecture (at least as derived from web information architecture): among other things, the term seems to give short shrift to the dynamic or interactive aspects of a product’s structure, and it fails to adequately address the information modelling concerns arising from software, enterprise-IT and business architecture. (Also I’m not quite sure what’s up with the apparent rift between (some groups in) the information architecture and content strategy camps. And while we’re at it, how come information design and information architecture are considered to be such fundamentally different, almost unrelated aspects? Can we find another field of architecture/design that holds similar views?)

Trying to take a somewhat unifying view of information architecture / modelling / analysis / design, it seems to me that we need to consider information in at least two different contexts: First, and perhaps simpler, information is an asset that our products’ functions act upon and exchange. Second, we use information modelling techniques to understand and describe the context that we, our organisations, and our products & services live and operate in.

Even if we use similar techniques and methods, it seems useful to differentiate between the context and content of our services, interactions, etc.

And maybe it’s time to consider functions and information less of a duality and more of an interdependent, interwoven whole than we seem to have in the past.

The end. Yeah, I know: it surprised me as well. I guess I have some more thinking to do here. More later, methinks.

Reference

Garrett, J.J. (2011) The elements of user experience: user-centered design for the web and beyond. 2nd ed. New Riders.

Concept of “service evidence” stretched too far?

Originally, the concept of physical evidence related to the idea of using physical objects to make intangible services more tangible to users and customers. Examples included the folded toilet paper (or paper sleeve on the toilet seat) in a hotel room and the framed degree certificates in, say, a lawyer’s office. Such evidence was especially relevant in the context of credence services.

This concept has rightly been expanded to include less physical examples of service evidence, such as an order confirmation email or a badge certifying a website’s trustworthiness as assessed by this or that organization.

Increasingly I notice the term evidence being applied to all artefacts (physical or not) a service user is exposed to in a service context. This includes the entire servicescape as well as any (digital) content provided to the user.

Does this latter aspect not stretch the concept of service evidence too far? Are we not straying too far from its original purpose?

I acknowledge that all such artefacts can impact the service experience and thus should be selected and used in a considered manner. But would it not be useful to distinguish between artefacts (primarily) used for the purpose of evidencing and those that are not? If so, what should we call those non-evidence artefacts?

These are not rhetorical questions…I’d really appreciate your thoughts on this. “Who cares?” and “Live with it!” might be useful answers if briefly explained.

Updates
2014-09-21: Tom Graves has written a brilliant post titled From Product To Service.