Version Control for Evolving Files

How often do your files evolve? For example, many researchers develop their protocols, paper drafts, code, slide decks, and analyses over time, going through many different versions of a file before settling on a final one (or even never finalizing a document!). This is a totally normal part of doing research but does make managing those files a bit challenging.

Thankfully, there is a solution to dealing with such ever-changing documents: version control. Version control allows you to keep track of how your files change over time, either by taking a snapshot of the whole document or making note of the differences between two versions of the same document. This makes it easy to go back and see what you did or even revert to an earlier version of the file if you made changes that didn’t work out (just think how helpful this would be for files like protocols!).

The good news is that there are several ways to accomplish version control, depending on your needs and skills.

The Simple Way

The easiest way to implement version control is to periodically save a new version of the file. Each successive file should either have an updated version number or be labelled with the current date. Here’s how a set of versioned files might look:

  • mydata_v1.csv
  • mydata_v2.csv
  • mydata_v3.csv
  • mydata_FINAL.csv

I like using the designation “FINAL” to denote the very last version – it makes for easy scanning when you’re looking through your files.

Dates are also useful, especially if you don’t anticipate ever having a final version of the document. Don’t forget to use ISO 8601 here to format your dates!

  • myprotocol-20151213.txt
  • myprotocol-20160107.txt
  • myprotocol-20160325.txt
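
If you want to automate these dated snapshots, here’s a minimal sketch in Python (the filenames are made up, and it assumes the file has an extension):

    # Save a dated copy of a file, using an ISO 8601 date so that
    # versions sort chronologically. Filenames are hypothetical, and
    # the file is assumed to have an extension.
    import shutil
    from datetime import date

    def snapshot(filename):
        stem, _, ext = filename.rpartition(".")
        dated = f"{stem}-{date.today():%Y%m%d}.{ext}"
        shutil.copy2(filename, dated)  # copy2 also preserves timestamps
        return dated

    snapshot("myprotocol.txt")  # creates e.g. myprotocol-20160325.txt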

The downside of this system is that it takes up lots of hard drive space, as you’re keeping multiple copies of your file on hand.

The Robust Way

For something more robust than the simple solution described above, I recommend a version control system such as Git. These systems come out of computer science and track the small changes between versions of a file, so they take up less disk space than keeping multiple whole copies. While originally designed for code, Git is now being used by researchers to track many other types of files.

The downside of version control systems is that they can have a high learning curve (though Git has a GUI version that is less onerous to learn than the command line). But once you get over the curve, version control software is a really powerful tool to add to your research arsenal, as it handles most of the legwork of versioning and offers other cool features besides, like collaboration support. Here’s a list of resources to help you get started with Git.
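
If you’re wondering what the basic Git workflow actually looks like, here’s a minimal sketch that drives the git command from Python (it assumes Git is installed and configured with your name and email; the filename and commit messages are made up):

    # The core Git loop: initialize once, then stage and commit each
    # time you want to record a version. Assumes git is on your PATH.
    import subprocess

    def git(*args):
        subprocess.run(["git", *args], check=True)

    git("init")                               # start tracking this folder
    git("add", "myprotocol.txt")              # stage the file's current state
    git("commit", "-m", "Initial protocol")   # record version 1
    # ...edit myprotocol.txt, then record the change:
    git("add", "myprotocol.txt")
    git("commit", "-m", "Lengthen incubation step")
    git("log", "--oneline")                   # list every recorded version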

(A side note for those familiar with GitHub: be aware that Git and GitHub are two different things. Git is the system that handles versioning, while GitHub is an online repository that houses files. They are designed to be used together, though it’s definitely possible to use Git without GitHub.)

An Alternate Way

As a last resort, you can leverage the fact that some storage platforms have built-in versioning. For example, both Box and SpiderOak keep track of each version of a file uploaded onto their systems. This is not the best option for versioning, as it takes control out of your hands, but it’s better than nothing and is useful in a pinch.

Final Thoughts

I hope you will consider using one of these methods of version control for your files. Version control is downright handy for when you accidentally delete a portion of a file, are sending documents back and forth with collaborators, need to revert to an earlier version of a file, or want to be transparent about evolving protocols. So pick the system that works best for you and go for it!


These are a Few of My Favorite Tools

If you follow me on Twitter, you may have noticed a small rant the other day about how much I dislike how Excel handles dates (it’s a proven fact that Excel is terrible with dates). My beef with Excel got me thinking more about the programs I actually like to use for data processing and cleanup. These are tools that I’d only stop using if you paid me serious money. And maybe not even then.

Since I love these tools so much and since I’m mistress of my own blog, I’m going to spend this post proselytizing about them to you because I want you to love them too. So here are my top 4 you-might-not-know-about-them-but-really-should tools for hacking at your data. Some of them only do a small job, but they do it incredibly well and are just what you need in a specific instance.


Regular Expressions

Regular expressions (or regex) are a somewhat obscure little tool for search and replace but I’ve found nothing better for cleaning up text. Regular expressions work by pattern matching text and have a lot of flexibility. Need to find every phone number in a document? Want to clean up dates? Have to reformat a document while keeping the text the same? Regex is the tool for you. Regex isn’t a standalone program but rather plugs into other tools (including all of the tools below, as well as some programming languages). I recommend this tutorial for getting started.
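
To make this concrete, here’s a small sketch using Python’s built-in re module (the sample text and patterns are made up for illustration):

    # Two common regex chores: finding phone numbers and cleaning dates.
    import re

    text = "Call 555-867-5309 by 3/7/2016 or 555.123.4567 by 12/25/2016."

    # Find every US-style phone number (three groups of digits
    # separated by hyphens or periods).
    print(re.findall(r"\d{3}[-.]\d{3}[-.]\d{4}", text))
    # -> ['555-867-5309', '555.123.4567']

    # Rewrite M/D/YYYY dates as ISO 8601 (YYYY-MM-DD).
    print(re.sub(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b",
                 lambda m: f"{m.group(3)}-{int(m.group(1)):02d}-{int(m.group(2)):02d}",
                 text))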

Notepad++

While not as dazzling as the other players on this list, Notepad++ is my go-to text editor and the main platform I use for leveraging regular expressions. It’s always good to have an open source text editor around and this one is my particular workhorse.

OpenRefine

OpenRefine (formerly Google Refine) is an open source tool for cleaning up spreadsheet data. This tool allows you to dig into your data by faceting it across any number of variables. I find it particularly handy for generating counts; for example, it’s incredibly easy to find out how many rows have {variable1=X AND variable2=Y} versus {variable1=X AND variable2=Z}. Faceting also allows you to edit select subsets of your data. You can also do stronger data manipulations, such as streamlining inconsistent spelling and formatting or breaking multi-component cells apart/collapsing columns together. I recommend this tutorial for getting started.
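
OpenRefine itself is a point-and-click tool, but here’s the faceted-counting idea sketched in plain Python to show what’s going on under the hood (the file and column names are hypothetical):

    # Count every combination of two columns, like a cross-facet.
    import csv
    from collections import Counter

    with open("mydata.csv", newline="") as f:
        facet = Counter((row["variable1"], row["variable2"])
                        for row in csv.DictReader(f))

    print(facet[("X", "Y")])  # rows where variable1=X AND variable2=Y
    print(facet[("X", "Z")])  # rows where variable1=X AND variable2=Z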

Bulk Rename Utility

Need to rename a number of files? Bulk Rename Utility is the tool for you! This software allows you to rename files in very specific ways, such as adding characters to the end of a name, removing characters from the beginning, changing something at a specific position in the name, and much more. You can also add numbering and dates to file names or do a custom search and replace with regular expressions. I don’t use Bulk Rename Utility a lot, but it saves me a ton of time and energy when I do.
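
Bulk Rename Utility is a GUI tool, but if you’re curious how this kind of batch rename works, here’s a sketch using Python’s standard library (the folder, extension, and naming scheme are all hypothetical):

    # Rename every .tif file in a folder to a numbered, consistent name.
    from pathlib import Path

    for i, path in enumerate(sorted(Path("scans").glob("*.tif")), start=1):
        # e.g. "IMG_0042.tif" -> "experiment1_001.tif"
        path.rename(path.with_name(f"experiment1_{i:03d}.tif"))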


Sustainability (a.k.a. Passing the Hit-by-a-Bus Test)

I’m finally back at work after a three-month maternity leave and trying to catch up on everything that I missed while I was home with the little one. It’s going fairly smoothly, mostly because I was able to do a lot of planning before I left.

Having time to plan ahead and a fairly strict deadline is definitely a benefit of taking maternity leave. But I’ve been thinking a lot recently about how this doesn’t always happen. For example, what happens if you suddenly get sick and can’t work for a while? In the worst case, your research could be retracted because you aren’t available to answer questions about the work.

All this has me thinking about sustainability. Basically, does your data pass the hit-by-a-bus test? I’ve had several conversations with my data peers on this topic in the last year and thought it worth exploring a little on the blog.

So how can you make sure your data lives on if you suddenly can’t work for a while? Or if you take a new job? Or if you actually get hit by a bus?

Documentation is probably the single most important piece of data sustainability. Not only should your notes be understandable to someone “trained in the art”, but it’s also a good idea to add some documentation to your digital files – I love README.txt files for this. You should document enough for someone (including your future self once you recover from the bus) to pick up exactly where you left off without taking weeks to decipher your work. And don’t forget about code and procedures.
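
A README.txt doesn’t need to be fancy; even a few lines like this entirely made-up example tell a stranger (or your future self) what they’re looking at:

    README.txt for the mouse_behavior project (hypothetical example)

    raw/      original video scoring data, never edited
    clean/    cleaned data produced by scripts/clean_data.py
    scripts/  Python scripts that clean the data and build the figures
    Last updated 2016-03-25 by J. Smith (jsmith@university.edu)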

There’s also a technical side to sustainability. Take file types, for example. Will your data live on outside of that weird software that only your lab uses? Making sure that your data is stored properly and well backed up also matters. Data shouldn’t be put on an external hard drive and forgotten.

Finally, ask yourself, “What is the worst that can happen?” The answer will vary from researcher to researcher, but thinking about this question will let you do a little disaster planning. It might lead to training a coworker to take care of your animals or taking extra precautions with your specialized equipment: whatever you need to do to make sure your work survives.

You may never get run over by that fictitious bus, but it can still be a useful exercise to think about sustainability. At the very least, you make your research more robust and easier to pick up if you need to go back to it in the future. At worst, your research will be one less thing to worry about if things take a turn for the worse.


The Art of Discarding

I’m in the process of spring cleaning my house and am getting lots of inspiration from Marie Kondo’s “The Life-Changing Magic of Tidying Up.” Her big message is that to truly achieve an organized home, you must discard all unneeded and unloved items before you can even begin to tidy. We hold on to a lot of junk, and it’s preventing us from enjoying and relaxing in our homes. Using the suggestions in Kondo’s book, the purging process is working really well for me and I’m already feeling better about a lot of my home spaces.

The act of cleaning my home by getting rid of unnecessary junk has me thinking about how underrated the discarding step is in the process of data management. It’s actually important to periodically get rid of useless data so that the good data is easier to find. Why wade through a bunch of files you’ll never use in order to locate the ones you want?

Besides clearing out the cruft, there are two other reasons to consider discarding data. First, junk data takes up hard drive space. I’m totally guilty of holding on to everything and anything digital – such as when I recently transferred all of my old laptop files to my new laptop – but this means I devote more and more disk space to stuff I don’t really need to keep. In the long term, it’s not a very sustainable solution.

The other good reason to discard is if you’re dealing with sensitive data. Sensitive data can be a pain to keep secure, but those security concerns go away once the data is destroyed. You can’t lose data that no longer exists! Best practice is usually to destroy sensitive data after a fixed retention period, so you have access to it for a set amount of time but not forever.

In many ways, data management is comparable to tidying your home; one must keep things organized and put away in the proper place in order to find them later. This analogy continues for the discarding process. Discarding is an important step in keeping a handle on what you have. So as you manage your data, I hope you consider how strategically trashing files can help keep your digital house in order.


Copyright and Data

In my quest to educate everyone on research data management, I’m always looking for easier ways to explain things. At the top of this list is copyright, which is weird to begin with and gets pretty squirrelly when applied to data.

My latest effort in this sphere comes as a flyer that covers the basics of research data and copyright. Hopefully, this will give you a better sense of how copyright does (and does not) apply to research data and how this affects what you can do with your own and others’ data.

The flyer is CC-BY licensed, so you are free to use and reuse as you like so long as you attribute!


Love Your Data

It’s Love Your Data Week, an effort coordinated by one of my amazing data librarian peers, Heather Coates. Love Your Data Week celebrates how prevalent and important data is to research while also acknowledging that we need to give our data some love from time to time.

Each day has a theme and I encourage you to check out the resources on the main site and those appearing on Twitter under the #LYD16 hashtag. I’ve also identified some posts and videos I’ve created over the last few years relating to the five themes. Do check them out and think about ways to love your data a little more this week!

Monday – Keep Your Data Safe

Tuesday – Do You Know Where Your Data Is?

Wednesday – Write It Down!

Thursday – Give and Get Credit

Friday – Transforming, Extending, Reusing Data
