Data Management Videos

I’ve been so busy talking about documentation on the blog recently that I’ve forgotten to share an awesome project that I’ve been working on: the data management video series!

Over the course of the last semester, I worked with an intern to create a series of 10 data management videos. The videos cover a range of topics and are all available on YouTube, so you can not only watch them whenever you like but also embed them on other webpages. I’m all for sharing content and, while these videos were predominantly made for researchers at my university, the more researchers who learn this stuff the better.

The full series list is as follows:

(As a geeky aside, I also want to point out that I’m wearing some of my favorite handmade items in a few of those videos. Keep an eye out for the epic sweater of awesome, the bad passwords dress, and the marvelous woman-in-science dress as you watch!)

These 10 videos are a solid start to work in this medium and I’m hoping that we can add more to this series over time!

Posted in dataManagement, video

Taking Better Notes

I’ve been talking a lot about documentation on this blog over the last few months, but there is definitely one more issue I need to address before we move on to other topics: taking better notes. Taking better notes is really at the heart of improving your documentation because this is the main way that researchers document their work.

To review, having sufficient documentation is central to making your data usable and reusable. If you don’t write things down, you’re likely to forget important details over time and not be able to interpret a dataset. This is most apparent for data that needs to be used a year or more after collection, but can also impact the usability of data you acquired last week. In short, you need to know the context of your research data – such as sample information, protocol used, collection method, etc. – in order to use it properly.

All of this context starts with the information you record while collecting data. And for most researchers, this means taking better notes.

Most scientists learn to take good notes in school, but it’s always worth having a refresher on this important skill. Good research notes are the following:

  • Clear and concise
  • Legible
  • Well organized
  • Easy to follow
  • Reproducible by someone “skilled in the art”
  • Transparent

Basically, someone should be able to pick up your notes and tell what you did without asking you for more information.

The problem a lot of people run into is not recording enough information. If you read laboratory notebook guidelines (which were established to help prove patents), they actually say that you should record any and all information relating to your research in your notebook. That includes research ideas, data, when and where you spoke about your research, references to the literature, etc. The more you record in your notebook, the easier it is to follow your train of thought.

I would also recommend employing headers, tables, and any other tool that helps you avoid having a solid block of text. These methods can not only help you better organize your information, but make it easier for you to scan through everything later. And don’t forget to record the units on any measurements!

Overall, there is no silver bullet to make your notes better. Rather, you should focus on taking thorough notes and practicing good note-taking skills. It also helps to have another person look over your notes and give you feedback on their clarity. Use whatever methods work best for you so long as you are taking complete notes.

Research notebooks have been used for hundreds of years. We can still refer to Michael Faraday’s meticulous notes or read Charles Darwin’s observations that led to the theory of evolution. These documents show that handwritten research notes have been and will continue to be useful. But to get the most out of your research notes, you need to start by taking better notes.

I challenge you this month to think about your research notes and work to take clearer, more consistent, and more thorough notes. Your ultimate goal is to make sure you have all of the documentation you need for whenever you use your data.

Posted in documentation, labNotebooks

Data or It Didn’t Happen

There’s a story in the news this week about the requested retraction of a study on changing people’s minds on same-sex marriage. I always find it interesting when retraction stories are picked up by major news outlets, especially when the article’s data (or lack thereof) is central to the reasons for the retraction.

The likely retraction (currently an expression of concern) involves a study published in Science last year looking at whether canvassing can change people’s minds. Study participants took pre- and post-canvassing online surveys to measure any shift in opinion. While the canvassing data appears to be real, it looks like the study’s first author, Michael LaCour, made up the data for the online surveys.

The fact of the faked data is remarkable enough, but what particularly interests me is how it was discovered. Two graduate students at UC-Berkeley, David Broockman and Joshua Kalla, were interested in extending the study but had trouble reproducing the original study’s high response rate. Upon contacting the agency that supposedly conducted the surveys, they were told that the agency had not actually run, and had no knowledge of, the pre- and post-tests. Evidence of misconduct mounted when Broockman and Kalla were able to access the original data from another researcher, who had posted it in compliance with a journal’s open data policy. Once they started digging into the data, they found anomalies.

In my work, I talk a lot about the Reinhart and Rogoff debacle from two years ago, in which a researcher’s access to the article’s data led to the fall of one of the central papers supporting economic austerity. We’re seeing a similar result here with the LaCour study. But in this case, problems arose through a common practice in research: using someone else’s study as a starting point for your own. Building from previous work is a central part of research, and bad studies have problematic downstream effects. Unfortunately, such studies aren’t easy to spot without digging into the data, which often isn’t available.

There’s an expression that goes “pictures or it didn’t happen,” suggesting that an event didn’t actually take place unless there is photographic proof. I think this expression needs to be co-opted for research as “data or it didn’t happen.” Unless you can show me the data, how do I know that you actually did the research and did it correctly?

I’m not saying that all research is bad, just that we need regular access to data if we’re going to do research well. We can’t build a house on a shaky foundation, and without examining the foundation (the data) in more detail, how will we find the problems or build the house well?

So next time you publish an article, share the data that support that article. Because remember, data or it didn’t happen.

Posted in openData, researchMisconduct

Templates


In my last post, I discussed my philosophy on documentation: most researchers need to take better notes and augment them with a few key types of documentation, as needed. I’ve already blogged about a few of these special documentation types – data dictionaries, README.txt files, and e-lab notebooks – but one structure we haven’t examined here is templates. Let’s correct that now.

Templates are one of my favorite recommendations for adding structure to research notes and making sure that you’ve recorded all of the necessary information. They co-opt the benefits of a formal metadata schema – making documentation easy to search across, helping you record all essential information, and providing consistency – without all of the fiddliness or rigidity. This makes templates much easier to adopt and use.

So how do templates work? Basically, you sit down at the start of data collection and make a list of all the information that you have to record each time you acquire a particular dataset. Then you use this as a checklist whenever you collect that type of data. That’s it.

You can use templates as a worksheet or just keep a printout by your computer or in the front of your research notebook, whatever works best for you. Basically, you just want to have the template around to remind you of what to record about your data.

Let’s look at an example. When I was a practicing chemist, there were a few critical pieces of information I needed to record every time I ran an experiment. This list included the following:

  • Date
  • Experiment
  • Scan number
  • Laser beam powers
  • Laser beam wavelengths
  • Sample concentration
  • Calibration factors, like timing and beam size

Using this list as a template, I would then record the necessary information every time I did an experiment. The result might look something like the following:

  • 2010-06-05
  • UV pump/visible probe transient absorption spectroscopy
  • Scan #3
  • 5 mW UV, visible beam is too weak to measure accurately
  • 266 nm UV, ~400-1000 nm visible
  • 5 mM trans-stilbene in hexane
  • UV beam is 4 microns, visible beam is 3 microns

Basically, the list is a memory aid to make sure my notes include everything they should for any given experiment. And I could even use different templates for different types of experiments to be more thorough.
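For those of us who keep digital notes, the same checklist idea can even be automated. Here’s a minimal, hypothetical sketch in Python – the field names simply mirror my example list above, and you would of course swap in the fields for your own experiments:

```python
# A template is just the list of fields that must be recorded for each
# experiment. These names mirror the example checklist above; adjust
# them for your own work.
REQUIRED_FIELDS = [
    "date",
    "experiment",
    "scan_number",
    "beam_powers",
    "beam_wavelengths",
    "sample_concentration",
    "calibration_factors",
]

def missing_fields(record):
    """Return the template fields that a notes record omits or leaves blank."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# A partially filled-in record for one scan.
record = {
    "date": "2010-06-05",
    "experiment": "UV pump/visible probe transient absorption",
    "scan_number": 3,
    "beam_wavelengths": "266 nm UV, ~400-1000 nm visible",
    "sample_concentration": "5 mM trans-stilbene in hexane",
}

# Flag what still needs to be written down before moving on.
print(missing_fields(record))
```

Even a tiny check like this catches the “oops, I forgot to write down the laser power” moments at collection time, rather than a year later when the information is gone.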

Remembering to record the necessary details is the biggest benefit of using a template, as this is an easy mistake to make in documentation. Templates can also help you sort through handwritten notes if you always put the same information in the same place on a notebook page. Basically, templates are a way to add consistency to often chaotic research notes.

I challenge you to try out a template or two and see if they help you take better notes. Because, as I’ve said before, research data without documentation are useless and, honestly, having insufficient documentation can be just as frustrating. So make your data better by using a template!

Posted in documentation

On Documentation

I just got back from my favorite conference on data, Research Data Access and Preservation (Storify highlights), and am processing all of the great things I learned about there. While some of these things will probably end up in future blog posts, I did want to share a bit of what I talked about during my own panel presentation, which is relevant here.

The panel itself was entitled “Beyond Metadata” and I spoke about different methods for teaching documentation types other than metadata. I was particularly excited to be on this panel because I think that librarians’ love of metadata doesn’t always translate into what’s needed in the laboratory. So even though your funder may ask, in a data management plan, which metadata schema you plan to use, most of the time that’s not the documentation type you really need.

My general philosophy on research documentation is as follows:

  • Most researchers don’t need formal metadata schemas, unless you have a big (time/size/collaborative) project to organize or are actively sharing your data.
  • Your first strategy for documentation should be to improve the research notes/lab notebook you are likely already keeping.
  • That said, you can augment your notes strategically with documentation structures such as README.txt files, data dictionaries, and templates.

It’s actually this latter category of documentation types that you find me talking about a lot, as these are the ones that can really help but that many researchers do not know about.

There are plenty of good reasons to improve your documentation (including giving you the ability to reuse your own data, making sure you don’t lose important details, and being transparent for the sake of reproducibility), but we often don’t teach documentation to researchers beyond the basics. So here are a few resources I’ve created so you can learn to improve your documentation:

Looking over this list, I realize that there are a few gaps in the content of this blog when it comes to documentation practices. So look for future posts on templates and good note taking practices!

Research may yet get to the point where metadata is commonplace, but we have many useful documentation structures to employ in the meantime. Research notes in particular have been used effectively for hundreds of years and will continue to be useful. In the end, you should use whatever documentation type works well for you and ensures that you record the best information you can about your data.

Posted in documentation

New Data Requirements and How To Meet Them

Around the time when I started this blog in 2013, the White House Office of Science and Technology Policy (OSTP) decreed that all major federal funders would soon have to require data management plans and data sharing from their grantees. It’s been almost two years since the OSTP memo came out, but we are finally starting to see the funders’ plans for enacting public access requirements.

The biggest recent announcement came from the NIH. NIH previously had a data sharing requirement for grants over $500,000 per year, but the new policy requires data management plans and data sharing from everyone. This matches the NSF policy on data, which will not change significantly under the new mandates.

In addition to NIH and NSF, other US funding agencies have new data policies. DOE, for example, now requires a data management plan with grant applications and data sharing from funded researchers. Similar requirements now exist for NASA, the CDC, and others. Basically, if you are getting research money from a US agency, you should now plan to write a data management plan and share your data.

So, given these new requirements, how do researchers meet them? In terms of data management plans, I’m pushing people at my university toward the DMPTool. The tool is regularly updated with new policy requirements/templates, contains helpful information for writing a plan, and has features that enable collaboration and review. It’s a great resource for anyone writing a data management plan for a US-based funding group.

The harder part is the data sharing portion of the new requirements. This is because a significant number of researchers who were not previously required to share their data will now have to do so. Additionally, funders haven’t been very good about specifying where to share data. So we have a huge need to figure out where to put data and not a lot of recommendations on where that actually should be.

In terms of what I’m doing on my campus, I have three recommendations. First, look for where your funder, journal, or peers recommend you put data. This is likely the best place to put your data. Second, look for lists of data repositories by discipline. I particularly like this one from the new journal Scientific Data and the master repository list at re3data. Finally, you can always contact your local data librarian. I expect finding repositories for people’s data is going to be a big part of how my university is responding to these new requirements.

Overall, I’m very excited about these new requirements as I think that data management will really help researchers take care of their data and data sharing will promote transparency in research. Still, there is not a lot of infrastructure or support behind these new demands. This makes it difficult for both those who support research data and those who generate it.

The good news is that this is an evolving process and that, over time, systems and workflows will develop to make it easier to comply with these requirements. Things will get better. Until then, remember that you likely have assistance at your institutional library.

Posted in dataManagement, government, openData