Wrapping Up A Project

Data managers talk a lot about doing data management before your project starts, but there is another point in a project that is critical to data management: when a project ends. My recent post on managing thesis data got me thinking about this critical moment, along with a recent tweet from Robin Rice, Data Librarian at the University of Edinburgh, on what usually happens to data post-thesis.

While all project data are susceptible to such loss, thesis data are particularly fragile because they are often handed off to a PI when the student leaves the university. This puts someone with little direct knowledge of the data in charge of caring for them in the long term. The truth is that your PI will be much happier, and you will be happier with your own long-term access, if you prep your data a bit before this handoff.

You have a unique opportunity to care for your data when you are wrapping up a project. Not only are the data still fresh in your mind, but you probably already perform some management actions, like backing up your data and storing your notebook, when wrapping up a project. Adding a few simple steps to this process will let you enjoy the products of your work well after you finish the actual project.

Back Up Your Written Notes

People always think to back up their digital data (which you should definitely do), but few ever remember to back up written notes. This is a shame because data without the corresponding notes are often unusable. Not only does a backup copy address the possibility of a lost notebook, but it also helps dissertators who hand over their notes at the end of their degree. If those researchers want access to their written notes after they leave their university, they must make a copy for themselves before the handover.

You can back up your notes by making physical photocopies, but these days I recommend digital scans. The benefit of scans is that you can store them directly alongside your digital data, which saves you from having to track down stray notes later. It does take time to scan a notebook, but the reward is continued access to your notes and data that stay usable going forward.

Convert to Open File Formats

This is the one that has defeated me personally. Even though I have all my files from graduate school, most of my data are locked up in a proprietary format that I no longer have the software to open. Don’t get stuck in the trap where you still have your data but can no longer read or use them!

If you haven’t done so already, wrapping up a project is a great time to convert files to an open format. Look for formats that are open, standardized, well-documented, and in wide use, such as: .csv, .tiff, .txt, .dbf, and .pdf. These formats can be opened by many programs, meaning lots of options for getting back your data when you need them.
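To make this concrete, here is a minimal sketch of such a conversion in Python. The file names are placeholders and I’m assuming spreadsheet data stored in Excel’s .xlsx format, but the same idea applies to whatever proprietary format your instrument or software produces.

    # Minimal sketch: dump every sheet of a proprietary workbook to open .csv files.
    # "results.xlsx" and the output folder are placeholders for your own files.
    import os
    import pandas as pd  # reading .xlsx also requires the openpyxl package

    os.makedirs("open_copies", exist_ok=True)
    sheets = pd.read_excel("results.xlsx", sheet_name=None)  # dict of sheet name -> table

    for name, table in sheets.items():
        table.to_csv(os.path.join("open_copies", name + ".csv"), index=False)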

If there isn’t a good open format for your data type, or you will lose important information during conversion, you’ll want to plan how you will maintain access to the necessary software into the future. Realize that this option takes much more effort, so opt for an open file format if you possibly can.

Utilize “README.txt” Files

I cannot recommend “README.txt” files enough for making sense of digital files and file organization. These simple text files answer the very important questions of “What the heck am I looking at?” and “Where do I find X?” in your project file folders. This information is useful at every level of your project, from the main project folder on down to the folder containing sets of data. Plan to create one README.txt file per folder in as many folders as you can.

By their name alone, README.txt files announce that they are the first file to open when you or someone else is looking through your old data. Their job is to provide a map for exploring your files. For example, a top-level README.txt should give the general project information and a very coarse overview of file contents and locations. A low-level README.txt would be more specific as to what each file contains. These files need not be large, but their contents should provide a framework for easy navigation through your digital files and folders.
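As an illustration only (the project and folder names here are made up), a top-level README.txt might look something like this:

    PROJECT: Stream temperature survey, 2010-2013
    CONTACT: [your name and a long-term email address]

    FOLDERS:
      rawData/   - unprocessed logger downloads, one subfolder per site
      analysis/  - cleaned .csv files and analysis scripts
      figures/   - final figures used in the thesis and papers
      notes/     - scanned pages from lab notebooks 1-3

    See the README.txt inside each folder for file-level details.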

When wrapping up a project, you should create a README.txt file for at least your top-level folder and your most important project folders. This is doubly important if you are handing off your data for someone else to maintain, as good READMEs make it exponentially easier for someone unfamiliar with the data to figure out what’s what. The system is also useful to you, the data creator, in the event you come back to the data in the future.

Keep Everything Together

Finally, you will want to track down stray files and folders when you wrap up a project. It is much easier to manage all of your data if it is in one place (or two places if you have both physical and digital collections). Note that this does not include backups, which are separate and can exist offsite. Don’t forget to include things like reference libraries and relevant paper drafts in this pile; you want to save everything related to the project in the same place.

Once you have everything together, save it to an appropriate place and back it up. Keep track of your files and backups and move everything to new media every few years or so. You don’t want to be that researcher looking for Zip disk readers in 5 years. Remember that just because your project is complete doesn’t mean you can now ignore your data.

Final Thoughts

Researchers are often anxious to move on to the next thing when wrapping up a project, but you must resist the temptation to speed through the data preparation process. Taking an extra day to prepare your data properly can mean the difference between being able to use your data in 3 years and not having access to them at all. Between all of the time and effort you have invested in that data, and the possibility that you may need it again in the future, it is worth taking a few extra steps to wrap up a project properly.


The Declining Availability of Data

The journal Current Biology published a paper yesterday that confirms what may be obvious to many of us: we’re really bad at keeping track of old data. Not only is it difficult to maintain data, particularly digital data, for many years, but researchers are also not trained in how to preserve their information. The result is a decay of data availability over time.

[Figure: data availability declines with article age. Source: Vines et al., “The Availability of Research Data Declines Rapidly with Article Age,” Current Biology (2014), http://dx.doi.org/10.1016/j.cub.2013.11.014]

This decay not only hurts us, the original data producers, by limiting what we can do with our own data later, but it also hurts others in our field. The Nature commentary on the original article provides a great example of why this matters, citing an ecologist who works on a plant that another scientist studied 40 years ago. Because the older data are now lost, the current ecologist cannot draw any useful conclusions about the plant over the long term.

In the Nature commentary example, the original scientist is now dead but his data are still valuable; data are often assets that need care long after we ourselves are around to use them. To address this, one scientist has suggested that we develop scientific wills, of sorts, to identify datasets of long-term value and designate who will care for them. No matter what, we need to start thinking about our data in the long term.

I’m not saying that every scientist needs to be an expert in digital preservation, but it does help to know the basics of keeping up with your data. Still, the best way to preserve data in the long term is to give them to a preservation expert (a.k.a. a data repository) to manage. This way, you don’t have to learn the ins and outs of preservation and you don’t have to worry about keeping track of the data yourself. It’s just what every scientist wants: a hands-off system that keeps track of your data while costing little to no money.

Data repositories come in two major flavors: disciplinary repositories run by an outside group and your local institutional repository run by your library. Either way, it’s their whole job to make sure that whatever is in their repository is available many years from now. I suggest starting with your local repository when looking for a home for your data, but be aware that many of these repositories were built for open access articles and cannot handle large datasets. In that case, consider one of the following repositories:

These repositories make data openly available because many journals and fields are coming to expect data publication alongside article publication. Still, it’s possible to upload your data and embargo them for a short period of time, letting the repository handle preservation while you keep working with the data. The repository figshare even has a new private repository feature, which I think is pretty cool: it keeps your data private (and privately shareable) for any amount of time but lets you easily switch a dataset to public when you need to.

This list represents my repository highlights but there are obviously many more available, especially in biology. Ask around to find out if there is one your peers prefer, which will make your data more likely to be found and cited.

Finally, I will add that we will be seeing much more about data repositories going forward. Between journal and funder requirements to publish data and the recent White House OSTP memo pushing for even more data sharing, data repositories and data publications are only going to grow from here. If it means that we stop hemorrhaging data over time, I think that’s a very good thing.


Save Your Thesis (and back it up too)

I remember being incredibly paranoid when I was writing my PhD thesis that my computer would crash and I would lose all of my files. After 5 long years of work, I did not want anything keeping me from finally graduating, least of all a lost dissertation or lost data. Luckily, no such calamity befell me, but I did have a friend whose laptop was stolen in the middle of writing his thesis. He was forced to start over from scratch because he did not have a good backup copy. Sadly, this is not a unique occurrence.

It’s bad enough to deal with the stress of writing a thesis and worrying about moving on from school; you do not need the added paranoia of losing, or being unable to find, important information on top of that. Thankfully, data management offers some practical tips that can keep your worries focused solely on writing the actual thesis.


Back up your files

One strategy that will save you a lot of thesis stress is having a good backup system. I recently wrote about “the Rule of 3”, and thesis time is a great opportunity to follow it. The rule basically says that you should have 3 copies of your files, two onsite and one offsite. If one of your onsite copies fails, you still have two copies to fall back on; this can reduce a lot of paranoia about losing your important files.

To further allay your fears, I recommend using an automated backup system and testing a restore from it. Automation removes any work you have to do beyond setup because, frankly, you have enough things to work on right now. Once you set up your backups, run through the procedure for getting your files back from the system. This confirms that your backup system is actually working and that you won’t be frantically searching for the restore procedure if you lose your main copy.
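Dedicated backup software or an institutional backup service is the easiest way to automate this, but even a small scheduled script beats nothing. As a rough sketch, with entirely made-up paths, a dated copy of your thesis folder in Python could be as simple as:

    # Rough sketch: copy the thesis folder to a new, dated backup folder.
    # Both paths are placeholders; point them at your own drive or server,
    # then schedule the script with your operating system's task scheduler.
    import shutil
    from datetime import date

    source = "C:/Users/me/Thesis"
    backup = "D:/ThesisBackups/" + date.today().isoformat()

    shutil.copytree(source, backup)
    print("Backed up " + source + " to " + backup)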

Finally, I will remind you about the hidden perils of cloud storage. In a way, cloud storage is great for thesis writing, especially if you want access from several locations. But you should definitely read your cloud storage service’s terms of service to be sure that they can’t do anything they want with your thesis files. Your thesis is too important to store in a cloud service that doesn’t protect your content.


Organize your information

A small thing that will smooth out the writing process is organizing your thesis documents as you create them. First consider how you want to arrange your thesis files. It may be logical to organize things by chapter or section, keeping separate folders for figures, data, references, etc. Pick a system that feels logical to you so you’ll know where to find everything when cross-referencing and assembling the final document.
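For example, a chapter-based layout (the names here are purely illustrative) might look like:

    Thesis/
      Ch01_Introduction/
        drafts/
        figures/
      Ch02_Methods/
        drafts/
        figures/
        data/
      References/
      Admin/           (forms, committee correspondence, deadlines)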

In conjunction with having a good organizational structure for your files, think about consistent file naming. Labeling written drafts differently from figures and tables, and drafts differently from final versions, makes it easier to find and use information. You can also tell, at a glance, what is done and what you have yet to do.

Another practice I highly recommend is to version your drafts. This means regularly saving a draft to a new file with a new version number. For example, I might save my first chapter drafts as the files “Ch01_v01.docx”, “Ch01_v02.docx”, etc. with each consecutive version being a more complete draft. The final version of this chapter would be named “Ch01_FINAL.docx”.
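If you would rather not eyeball the folder to figure out the next number, a few lines of code can do it for you. Here is a rough Python sketch based on the naming pattern above (the folder path is an assumption on my part):

    # Rough sketch: find the next version number for a chapter draft that
    # follows the Ch01_v01.docx, Ch01_v02.docx, ... pattern described above.
    import glob
    import re

    def next_version(chapter="Ch01", folder="."):
        versions = []
        for path in glob.glob(folder + "/" + chapter + "_v*.docx"):
            match = re.search(r"_v(\d+)\.docx$", path)
            if match:
                versions.append(int(match.group(1)))
        return "{}_v{:02d}.docx".format(chapter, max(versions, default=0) + 1)

    print(next_version())  # e.g. "Ch01_v03.docx" if v01 and v02 already exist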

Not only does versioning allow you to easily revert to an earlier version of your draft or recover from a corrupt file but it also helps you keep track of the most current copy. This last point is very important if you are writing your thesis on multiple computers; you need to know which is the most current copy so that you don’t repeat effort or have to deal with merging edits.

In the end, you want a clear workflow for where things will go and how they will be named. Taking a few minutes before you start your thesis to come up with a system, and then sticking with it, can save you time later when you are looking for that one particular file right before submission.


Manage your references

I cannot say enough about the value of a good citation manager while writing your thesis. You are going to be citing a lot of sources, so you want a system that both organizes your references and helps you format your actual citations. There are many options available to you—most notably Mendeley, Zotero, RefWorks, EndNote, and Papers—so pick one and run with it. Writing a thesis without a citation manager is just asking for more frustration and stress.


Think ahead

You should address all of the things mentioned in this post before you actually start writing. It will take a little time at the beginning, but once you have set up your backup systems, established your workflows, and chosen a citation manager, everything should fade into the background behind actually writing. That’s the whole point of data management—to build workflows that make it easier for you, in the long term, to do your work.

So take a few minutes at the beginning of the process to set things up. I can’t promise it will entirely relieve your stress, but at least you’ll be worried about your writing instead of losing your thesis.


E-Lab Notebooks

I gave a talk on e-lab notebooks (ELNs) at UW-Madison yesterday. The talk covers the reasons for making (or not making) the switch to an ELN, what to look for in an ELN, and some of the things that Madison has done in this area. If you are unfamiliar with e-lab notebooks, it should provide you with a nice background on the technology.

In addition to the slides below, you can also watch a video of the talk here.


Rule of 3

[Image: storage media. Source: http://www.flickr.com/photos/9246159@N06/599820538/ (CC BY-ND)]

There is a saying about storage in the library world: lots of copies keep stuff safe. The acronym, LOCKSS, not only captures this principle but also names two storage systems, LOCKSS and CLOCKSS, which libraries buy into to add redundancy to their data storage. The idea behind the principle is that even if your local storage system fails, you still have access to your data.

LOCKSS is a great concept, but for everyday storage I boil it down to the ‘Rule of 3’. This rule of thumb says that you should keep 3 copies of your data, 2 onsite copies and 1 offsite copy. This is not only a good level of redundancy, but also a very achievable level of redundancy.

The offsite copy is actually critical to the success of the Rule of 3. Many people keep their data and a backup copy onsite, but this doesn’t account for scenarios where the building floods or burns down, or some other disaster strikes. One only has to look at universities recovering after Hurricane Katrina or the 2011 Japan tsunami to see how devastating a natural disaster can be to research (among other things). Storing a copy of your data offsite can make the recovery process a bit easier if everything local is lost.

While the Rule of 3 speaks mainly to redundancy, I also see it as a recommendation for variety; namely, that each copy should be on a different type of hardware. Usually, the first copy is on your computer, so options for the other copies include external hard drives, cloud storage, a local server, CDs/DVDs, tape backup, etc. Each of these technologies has its own strengths and weaknesses, so you spread out your risk by not relying on one storage type.

For example, if you keep your data backed up offsite on commercial cloud storage, keeping an extra copy on a hard drive onsite means that the safety of your data does not depend solely on the success of a business. Alternatively, tape backup is reliable but slow to restore from, which makes it a great option for the ‘if all else fails’ backup copy. The exact configuration of your backups will depend on the technology options available to you, but variety should be a factor when you choose your systems.

I personally love the Rule of 3 and follow it for my work information. For my data, I keep:

  1. a copy on my computer (onsite)
  2. a copy backed up weekly to the office shared drive (onsite)
  3. a copy backed up automatically to the cloud via SpiderOak (offsite)

The shared drive is the weak link in this chain, as I transfer files manually, but setting a weekly reminder in my calendar makes sure that I stay on top of things. Additionally, I would not use the office shared drive if I had security or privacy concerns with my data. Besides keeping my data in these 3 locations, I have practiced retrieving information from both backups so I know that they are working and how to restore my information if disaster strikes.

In the end, the Rule of 3 is simply an interpretation of the old expression, ‘don’t put all of your eggs in one basket.’ This applies not only to the number of copies of your data but also to the technology upon which they are stored. With a little bit of planning, it is very easy to ensure that your data are backed up in a way that dramatically reduces the risk of total loss.


Open Access/Open Data

This week is Open Access Week, an annual celebration that promotes and raises awareness of the growing Open Access movement. There are a lot of great reasons to publish open access, including making research openly available and shifting away from an unsustainable journal pricing model, but I want to focus my celebration of Open Access Week on Open Data.

Open Access and Open Data are very different but they share common values: accessibility, transparency, ease of information reuse, a return on investment for public funding, and advancing research. While Open Access publishing has taken off in the last few years, especially with the success of open journals like PLOS ONE and faculty-led mandates like the one from Harvard, the efforts to open up our research data are still developing. For this reason, I think it’s important to take a moment during Open Access Week to talk about Open Data.

What is Open Data?

Open Data is the idea that research data should be made available upon the publication of a paper and as part of peer review. Data shed light on the research process in a way that an article alone cannot. With stories of fraud and irreproducible research increasingly in the news, we need practices like Open Data to detect these issues earlier.

Another reason for Open Data is that the value of data is increasing in the current funding climate. Between more access to data and new tools for analysis and mining, we are able to conduct research that simply wasn’t possible before. With shrinking research budgets, data are valuable research products that we can no longer afford to ignore.

Why should I make my data open?

A good reason for Open Data comes from a recent study in PeerJ that found a 9% average increase in citation rates for papers that had open datasets as compared to papers without shared data. The citation increase was upwards of 30% for the older papers sampled, suggesting that this citation effect increases over time.

Opening up research data also benefits us by giving us access to data we could not work with before. Not having to produce all of the data ourselves is a great thing, but those data have to come from somewhere. We must be willing to provide useful data to others if we want access to useful data ourselves.

What can I do about Open Data?

The first step is simply to understand why there is a movement toward Open Data, even if you personally choose not to share data. The way we conduct research is changing and we need to know how to navigate those changes in order to be successful. Open Data is not going to happen universally overnight, but the ever-increasing momentum in this direction means we need to stay informed of the whys and wheres.

For those a little more comfortable with the idea of Open Data, consider sharing an old dataset or a negative/unpublishable study. This is a great way to get credit for information that you are not actively using and it will familiarize you with the data sharing systems. From there, you can share more datasets as you choose or as requested by funders/journals/readers.

As a librarian, I’m also spending this week letting people know about Open Data. This blog post is one of the ways I’m doing that, but I have also hung up a poster in my library.

In keeping with the open data theme, the files are openly available (both PDF and Adobe Illustrator files) for you to use and remix. One person has already used the files to make a poster for their library and I would love to see more versions!

Happy Open Access week!
