Be fair and comprehensive when conducting research
- brucewiebusch
- Oct 17, 2020
- 7 min read
I’m no Einstein. But I have read some of his writings about the importance of research. One of them is a speech he gave for Max Planck’s sixtieth birthday in 1918, before the Physical Society in Berlin. In that speech, Einstein talks about the principles of research, and in characterizing the work of the theoretical physicist he says, “It demands the highest possible standard of rigorous precision in the description of relations, such as only the use of mathematical language can give.”
I think technical content developers need a similar attitude. Einstein’s point is still relevant to technical writers today: be accurate and comprehensive in your research and interviewing. Don’t settle for the minimum research needed; go beyond what’s expected. Look at your research and subject matter from multiple perspectives, and consider mentioning the trade-offs associated with the technology you cover.
If you listen to pharmaceutical advertisements these days, you may notice the trade-offs, a mix of “good” and “bad” messages. The good messages are about the intended benefits of the medicines being advertised. But now, unlike in previous decades, the advertisements also carry bad messages about harmful side effects of these drugs (e.g., “Side effects may include internal bleeding, muscle spasms, thoughts of suicide, etc.”). These admissions are not voluntary: drug companies are required to report the bad side effects of their drugs in their advertisements for the good of the viewers, specifically the potential users of these pharmaceuticals.
Most advertisements and content produced today are not required to include “bad” items the way drug companies are. Most companies get to choose which path they go down: editorial, sales, or something in between. If your research and content are designed to push a sales agenda, you will have a harder time convincing your audience that your content is trustworthy.
Even if you do attempt to be purely editorial and ethical in your research and reporting, you could still inadvertently be less than complete or comprehensive in your content development. Some very complex technological and scientific concepts may be comprehensible only to a relatively small number of people with extensive training, education, or experience in a specific discipline, as Einstein and Planck had in physics. So even if you believe you have been thorough, comprehensive, and ethical in your approach to a given topic, you may not know what you don’t know. Don’t be afraid to ask for help.
Innovation is often rooted in the scientific method. Like good scientists, good technical content editors are open to people who challenge their work. A willingness to answer tough questions, share information, and collaborate can add credibility to your content.
How you conduct your research is extremely important to your content. The questions you ask, how they are worded, the order in which you ask them, and the questions you leave out are all planning and interviewing decisions that can steer your research down one path or another.
Engineers sometimes use technical reports to present the results of their research. These reports typically contain a cover or title page, an introduction or abstract, a table of contents, and a summary or discussion of the results.
Some technical reports include an interpretation of the results, and perhaps even recommendations based on them. If you are writing a technical report as an editor, be careful not to interpret results too narrowly. Be balanced in your approach: don’t overemphasize the parts of the research that are favorable to you while de-emphasizing the parts that are not.
Just because you’ve been communicating almost all your life doesn’t mean you are an expert at communication.
And communicating longer or more often doesn’t necessarily mean you have more useful experience than the next person. Some of the most talkative people have very little to say that would make good content. Conversely, some quiet people I know can say a great deal in very few words.
What is communicated in content is a reflection of the content creator, the questions that person asks, and the questions that go unasked. If your organization is developing a story about the environmental impacts of hydraulic fracturing for oil, the writer may ask about emissions from generators and other equipment at the well site. The writer might also ask about earthquakes and the disruption of natural habitats around the well. But if the writer doesn’t know to ask about the hundreds of thousands of gallons of clean water that go down the well, return contaminated, and require proper disposal, that gap in questioning can leave a big hole in the environmental story.
The content creator’s mindset and attitude about the subject matter play a huge role in the outcome of the content and the messages it sends. If the writer looks at the subject in an “all or nothing” way, the reader can end up with a distorted understanding. So if an environmental story about hydraulic fracturing reports only the negative impact of contaminated wastewater from wells, the reader could miss the positive environmental side of the story: lower harmful emissions from generators, better equipment efficiency, or new methods of reducing wastewater by recycling it on site (which reduces the total amount of water needed by 90%).
Likewise, content that overgeneralizes a topic does a disservice to both the content developer and the reader or viewer. Stories that focus only on the environmental impact of hydraulic fracturing may leave readers feeling negative about it because of the downsides of this type of oil recovery. To be balanced, the content developer should provide broader perspectives on hydraulic fracturing. To be fair, the developer might also incorporate facts about how hydraulic fracturing has allowed a country like the U.S. to reduce its dependence on foreign oil, or how it has lowered the cost of gasoline and other products derived from oil. Without necessarily realizing it, a writer can dwell on the negatives and ignore the positives, shaping the message viewers and readers take away from the content. Or, more overtly, a writer can discount the positive aspects of a story, overgeneralize, or simply miss them. The fact that hydraulic fracturing helped turn the U.S. into a large exporter of oil is significant only if we know about it and look closely at what that change did to the U.S. economy, or how it created jobs in places like Northeast Ohio.
Labeling is like overgeneralization: you identify a company mostly by its products. Sometimes that’s OK. Labeling a company like Apple a technology leader in the computer or cell phone industry might be accurate if you can point to several patents in those areas to back up the statement, and even then a content creator might need to explain why the company leads in that specific technology area. Labeling can work the other way, too. We might be inclined to label Exxon Mobil or BP an enemy of the environment after learning about the oil spills those companies caused. But labeling, even if there is some truth to it, doesn’t tell the whole story. Labeling a company based on the actions of one or a few people or events is unfair and, often, inaccurate.
When content concerns something happy or good, like a dog rescuing someone from danger, communicating the story is not as challenging as when the subject involves tragedy or conflict. Most people could convey the common emotional response to a dog that, say, barks and saves a child from a burning house. But communicating about subject matter that involves conflict, criticism, or controversy can be more difficult. Conflict, controversy, and criticism evoke emotions that tend to distract and distort the way information gets researched and reported. Writing a story about the 2016 presidential election and a particular candidate’s view on a subject might be harder because of the writer’s own emotional response to that subject. If two writers were assigned a story about corporate tax cuts under President Trump, a business owner would write from a different perspective than a single parent on welfare trying to raise a family on $24,000 per year.
Jumping to conclusions and making assumptions are research death traps; avoid them when creating technical content. For example, it is dangerous to assume you understand what readers or content consumers are thinking based on momentary trends. Don’t assume that everyone buys from a company for the same reasons.
When conducting quality and satisfaction surveys, I include direct quotations from customers in my reports. In addition to the quotations, I ask customers a series of questions and score the results on a scale from one to five in each of ten key categories we monitor as key performance indicators. It has been tempting to omit survey results that convey negative information or whose scores were low enough to bring down the overall numbers. But I include the true results anyway, because that’s what people at the company I work for need to see: direct, first-hand results, even if they are not flattering.
After I have assembled the survey results in a table and crunched the numbers, I always go back and look at the results from several different angles. For example, I can count the results of the biggest customers equally with the results of smaller customers, which on one level is fair. The problem with this approach is that it may not be representative of your work if only a few customers account for the majority of it. If you process 99 of 100 units for your big customer, who gives you high marks on the survey, that is an accurate and precise reflection of customer satisfaction. But if you survey one of your small customers, and this small customer gives you low marks on the one unit you processed for him, that represents only 1 percent of your actual output. It would be unfair to give the small customer’s survey results equal status with the large customer’s. So sometimes I produce two reports: one for the biggest customers and one for everyone else. Both provide valuable information and together give you a better understanding of your customers.
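To make that difference concrete, here is a minimal sketch in Python of the two views described above: an unweighted average that treats every customer’s survey equally, and a volume-weighted average that counts each score in proportion to the units processed. The customer names, scores, and unit counts are hypothetical, and the ten scoring categories are collapsed into a single score per customer for simplicity.

# Compare an unweighted average (every customer counts the same) with a
# volume-weighted average (each score counts in proportion to output).
# The data below is made up for illustration.

# (customer, average survey score on the 1-to-5 scale, units processed)
surveys = [
    ("Big Customer", 4.8, 99),   # accounts for most of the output
    ("Small Customer", 2.0, 1),  # one unit, low marks
]

# Unweighted: every customer's survey carries the same weight.
unweighted = sum(score for _, score, _ in surveys) / len(surveys)

# Volume-weighted: each score is weighted by the units processed.
total_units = sum(units for _, _, units in surveys)
weighted = sum(score * units for _, score, units in surveys) / total_units

print(f"Unweighted average:      {unweighted:.2f}")   # 3.40
print(f"Volume-weighted average: {weighted:.2f}")     # 4.77

The two numbers tell different stories: the unweighted 3.40 suggests mediocre satisfaction, while the volume-weighted 4.77 reflects how the vast majority of the actual output was received. That gap is exactly why producing both views, as described above, gives a fuller picture of your customers.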
Recommendations I make in the surveys are based on facts, direct quotations from customers, and things that can be independently validated and confirmed—like an email or a text.
Opinions? Not usually included in recommendations. I wouldn’t say never include an opinion, but if you do, have something verifiable to back it up.
More questions to ask about your research and your sources of information:
• What are the sources of information? And more importantly, why were they chosen?
• What research was done prior to interviewing them?
• What is your procedure, process, or methodology for gathering and researching background and factual information?
• Was input from customers included in content? Why or why not?
