Slinger's Thoughts

November 4, 2013

My first time presenting at a SharePoint Saturday

Filed under: Community, Disaster Recovery, SharePoint — slingeronline @ 8:49 am

Well, I did it. First one under my belt. I was chosen as a presenter for SharePoint Saturday Dallas (SPSDFW) and presented my session this past weekend. I had a blast, was a bundle of nerves, and overlooked some of the stuff in my notes. (I need to remember to keep to those a little closer next time.) My session wasn't a sellout or standing room only, but that's OK. It was a great learning experience, and my next presentation will be better. I will be submitting to SPSSA (San Antonio)/SPSAustin and SPSHOU (Houston) this coming year, just to round out the Texas SharePoint Saturdays. I might throw Houston's Techfest in there also; not sure. (Still contemplating HSPUG.) In any case, I do have a slide deck to post here, and I will be posting a blog article about the main theme of my presentation in the next few weeks. In the meantime… slides.

Preparing for Disasters-SPSDFW

August 30, 2013

SharePoint Governance and Disaster Recovery

Filed under: Disaster Recovery, SharePoint — Tags: , , — slingeronline @ 12:25 pm

Those of us who talk about SharePoint disaster recovery often mention that any disaster recovery strategy needs to be part of an overall SharePoint governance plan, but I don't think any of us have gone into much detail about what that governance plan should include. If any bloggers have, then please reference their blog posts in addition to this one. In this post I am going to show the disaster recovery portion of a governance plan that I am trying to implement at our organization. Please keep in mind that this is part of a much larger, detailed governance plan for a SharePoint environment.


June 28, 2013

Milton’s Job – Who is responsible for maintaining and managing backups

Filed under: Disaster Recovery, SharePoint — slingeronline @ 9:30 am

A while back a user I follow on Twitter, Wendy Neal (@SharePointWendy), asked a question about who is responsible for backing up SharePoint. Not something as specific as it needing to be "Dave, from Accounting," but more of a roles discussion: is it the SharePoint administrator's role, or the server admin's? She, Sean McDonough (@SPMcdonough), and I had a brief discussion about it. Granted, it was mostly Sean and Wendy, since Sean is Sean (one of the authors of the SharePoint Disaster Recovery Guides). It does bear repeating though, since the question was asked, and it fits nicely into my series of blog posts on SharePoint disaster recovery.

So who is responsible? If you don't know this, then it is time to start asking some serious questions of the department that owns your company's SharePoint installation. The person responsible might turn out to be you. It also might be "no one," which loosely translates into "you." The official answer is, of course, "It depends." What aspect of your disaster recovery plan are you trying to cover? In most larger organizations, the responsibility will be shared among several individuals:

  • Hardware – Hardware should be the responsibility of the server administrators. This covers total catastrophic failures, so that you can restore from bare metal.
  • Databases – SharePoint runs on many databases. While digging into them is not recommended, or supported, entire databases can be backed up and restored with no ill effects other than some minor restore pains.
  • Content – SharePoint content pretty much falls to the SharePoint administrator. Granted, cooperation with the server and database administrators is a necessity, but ultimately, if a user cannot find or access their content, or something goes wrong with SharePoint, the end users are going to look to the SharePoint administrator both to blame and to solve their problems.

If you are a SharePoint Administrator, congratulations!

Short version: You are now responsible for ensuring content is safe and secure for your end users. Fortunately, it isn't that difficult to do, and I hope my last several blog posts have helped.

Long version: For any world-class disaster recovery scenario to work, all of the administrators need to work together to ensure that your disaster recovery plan can account for any level of disaster that may strike your farm. If your database administrator has set up mirroring as a disaster recovery and/or high availability strategy, you need to make sure that your SharePoint farm is aware of it and configured appropriately. Your database administrator also needs to know that some SharePoint databases don't do mirroring well, if you have those particular service applications in your farm.

Your server administrator needs to understand that SharePoint isn't simply a piece of software that resides on a single machine, but rather a complex combination of several servers that make up the whole farm. Not all web front ends are the same, and your application servers and WFEs are not interchangeable for DR purposes. Taking a snapshot of one WFE in a farm that has ten probably isn't going to help much after a disaster strikes, simply because of the subtle differences between them. Granted, a WFE is probably easier to restore to a farm than an application server, but your server administrator needs to know this.

There is a good chance that the server admin and the DB admin aren't certain of the magic that makes a SharePoint farm work smoothly. They may want to make changes that Microsoft has suggested are a bad idea, which, incidentally, is a bad idea. As a SharePoint admin, it is your responsibility to make sure that they are aware of this.

So why Milton? Sometimes getting a server admin or a DB admin to understand what you are trying to tell them is difficult, especially with a technology that is as potentially complex as SharePoint. There are times when you might feel like Milton from the movie Office Space, and no one is listening to you. Do what you can with the tools you have, such as Central Administration, until you can get what you need.
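As a rough sketch of what "making the farm aware of it" can look like: in SharePoint 2010 and later, a content database can be told about its mirror partner so that the farm fails over along with SQL Server. This assumes database mirroring is already configured on the SQL side; the database name and mirror instance below are hypothetical placeholders.

```powershell
# Point a content database at its SQL Server mirror partner.
# "WSS_Content_Main" and "SQLMIRROR\INST1" are placeholder names.
$db = Get-SPDatabase | Where-Object { $_.Name -eq "WSS_Content_Main" }
$db.AddFailoverServiceInstance("SQLMIRROR\INST1")
$db.Update()

# Confirm the failover server was registered.
$db.FailoverServer
```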
You might also notice that when I laid out the different areas of a DR strategy, I used the word "should" a lot. Just because a particular admin "should" do something doesn't mean that they actually are. It is probably not a bad idea for you to verify that these things are happening.

May 14, 2013

High Heels are not Hammers – How do you create backups?

Filed under: Disaster Recovery, SharePoint — slingeronline @ 9:00 am

If you’ve been playing along, I’ve talked about what, when, where, and why you should create backups. Once you have a DR plan that includes what you need to back up, how often, and where it is stored, with plenty of documentation, we can finally get down to what tools you should use to perform your backups. (If you bought the tools first, and then planned your strategy around what the tools could or could not do, please throw away your DR plan and start over.) There are a myriad of tools out there to back up and restore content in SharePoint specifically, as well as on your servers in general. Keep this in mind, though: just because a tool can be used in a certain way doesn’t always mean it should be. Don’t compromise your DR strategy because a tool doesn’t cover it. Find the tool to fit your needs; don’t change your needs to fit the tool. Look at the “toolkit” pictured below. What happens if you need a Phillips-head screwdriver? You obviously don’t have the right tool for that.



So what tools should you use? Well, if you have planned out your DR strategy in great detail, and included specific items that you absolutely must create backups of, and what you can reasonably get away with not having backed up, the choice should become fairly obvious: get the least expensive tool that meets all of your requirements. Do you need all of your SPD workflows backed up? What about workflow history? Do you need versions of your content backed up? Permissions? Customizations? There is a lot to consider. Your environment may have some stuff in it that some backup tools don’t handle very well, or even at all. If you decide that using the built-in Central Administration tool is all you need, great. I hope you don’t have any custom web.config files. And you might find yourself installing all of your 3rd party features again after a disaster strikes. It would probably be better to get a tool that covers those cases. You may also want to give certain users the ability to restore their own content, depending on your environment. It all depends on what your DR strategy is. These are just a few examples of the thousands of things that need to be taken into consideration.

Now that you have this information, it is probably a good idea to go back and look at your DR strategy again. If you see a reference to a specific piece of software anywhere in your DR plan, you did something wrong. Your DR plan should be as generic as possible. Got that? Your DR policy needs to be “specifically generic.” Be specific about the environment and what needs to be backed up, but generic about the tools you use to get there. Once you have everything in your DR strategy figured out, hand off deciding which software to buy to someone else if you can. (It’s the easiest way to prevent bias.) To get you started, here are some of the most popular 3rd party disaster recovery tools available, in no particular order.

Each one has strengths and weaknesses. If your DR policy needs a 3rd party tool, make sure that whatever you get meets all of your requirements. Whatever you do choose, test it to make sure it meets those requirements. If you find after testing that you are running into some roadblocks, read this.
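For comparison, the built-in tools are free and scriptable. A minimal full-farm backup with the out-of-the-box PowerShell cmdlets might look like the sketch below; the UNC path is a placeholder, and both the farm account and the SQL Server service account need write access to it.

```powershell
# Full farm backup to a network share using the built-in cmdlet.
# "\\BackupServer\SPBackups" is a hypothetical path.
Backup-SPFarm -Directory "\\BackupServer\SPBackups" -BackupMethod Full

# Review the backup history to confirm the job completed.
Get-SPBackupHistory -Directory "\\BackupServer\SPBackups"
```

Remember that this covers farm configuration and content databases, but not everything (custom web.config changes and 3rd party installers, as noted above, are on you).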

April 2, 2013

Backing up the backed up backups of your backups – Where do you keep your backup files?

Filed under: Disaster Recovery, SharePoint — slingeronline @ 2:49 pm

If you have been following along the past few posts, then you have a pretty good idea what you need to back up, and when you need to back it up. So once you have a governance policy in place that says what you need to back up and when your backups need to run, you are set to go, right? Sure, but where are you going to keep the backup files? I would like to say that this is a "no-brainer," but I have seen many times where the backup files were stored on the very server that was backed up. I will admit that I am guilty of it as well, although under unique circumstances. (Those circumstances being that it was a testing environment and I didn’t really care if the entire farm went belly up; I was in a position to lose exactly no critical data at all.) So where do you keep those files? When deciding where to keep your backup files, there are several things to consider; you will need to weigh the importance of each aspect to decide which option is best for your organization.

Backup Latency

Backup latency is how quickly a backup file can be written to the selected location. Writing the backup file to a local drive will be extremely fast, while storing your backup in the cloud will likely be significantly slower.


Safety

While storing a backup file locally is very quick, it is not very safe. If something happens to your server that makes it inaccessible, chances are your backup file is also inaccessible, which means it is useless.

Restore Latency

Restore latency is how fast you can get the information out of your backup and into your environment. Although there is a close relation between backup latency and restore latency, they are not identical. Often you do not need to restore the entire contents of your backup, but rather a small portion of it. (I plan to address this in more detail in a future post.)
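As an illustration of restoring a small portion rather than the whole farm, a single site collection can be restored from its own backup file with the built-in cmdlet. The URL and path below are placeholders.

```powershell
# Restore one site collection from its backup file, overwriting the
# existing (broken) site collection. Names are hypothetical.
Restore-SPSite -Identity "http://portal/sites/finance" `
    -Path "\\BackupServer\SPBackups\finance.bak" -Force
```

How long this takes depends heavily on where finance.bak lives, which is exactly the restore latency trade-off discussed here.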

So where are the best places to store backup files? Below is a table* that lists some common locations and how they rank in my experience. Keep in mind that your mileage may vary.


Storage               Backup Latency   Safety   Restore Latency
Cloud                 Low              High     Low
Tape                  Low              Medium   Low
File Share            Medium           Medium   Medium
Portable Hard Drive   Medium           Low      High
Local Drive           High             Low      High

Many people will wonder why I said tape only has medium safety. It can be very safe, if it is stored offsite. If you keep your tapes next to your server, it isn’t much better than a shared drive in terms of safety. (Which makes it one of the worst options to go with, unless you have a strict policy around tape management that will prevent the tapes from being stacked on top of the server rack.) The same can be said of the portable hard drive. Typically they are left plugged into the server and just stacked on top of the rack. If there is a strictly enforced policy that says the portable drive must be kept offsite unless it is being actively used in the backup or restore process, safety goes up. The reason this is important is that not every disaster is related to software or hardware failure. Sometimes environmental conditions are the cause of your disaster. If your server room floods and all of your backups are kept in the same room with your server, you didn’t really have a backup after all, did you? There are other considerations as well: convenience is one, and the actual financial cost should always be another. I’ve added those to the table* below.


Storage               Backup Latency   Safety   Restore Latency   Convenience   Cost
Cloud                 Low              High     Low               High          High
Tape                  Low              Medium   Low               Low           Medium
File Share            Medium           Medium   Medium            High          Medium
Portable Hard Drive   Medium           Low      High              Medium        Medium
Local Drive           High             Low      High              High          Low

You can tell that there isn’t really a best, “one size fits all” solution. Each has its advantages and disadvantages. It truly depends on what your needs are and what your organization can afford. Please note that there also is no real correlation between cost, safety, convenience, etc. I should also note that with tape and portable hard drives you can increase the safety, but it will also decrease the relative convenience. Keep all of this in mind and be sure to include it in your disaster recovery planning.

Cost can also increase with the amount of storage space required. What do you want to keep in your backups? If you keep your SharePoint backups on the local drive and then, through a different disaster recovery policy, back up the entire server to the cloud, you just backed up your backup, and that uses storage space. Your backup will take a little longer and use up a little more storage space. This will eventually add up. Instead of backing up SharePoint to the local server and then backing up the local server to the cloud, backup included, it probably makes more sense to back up SharePoint to that same final location in the first place. This also means that in the event of a disaster you don’t have to restore the backup from the restored backup. (If we keep this up we will end up in a cyclic redundancy loop, like having Doritos Locos Tacos flavored Doritos. Are we next going to have some Doritos Locos Tacos flavored Doritos Locos Tacos?)

*The values in the tables are not from any official source, but just from my personal experience. Your experience may vary greatly from mine. This is only meant as a guide to what you should consider when choosing a location to store your backup files.

March 29, 2013

OBD-II Diagnostics – When do you back up?

Filed under: Disaster Recovery, SharePoint — Tags: , , — slingeronline @ 7:56 am

In my last post I spoke about what needed to be backed up. In this one we will address something else that you need to consider in your disaster recovery plan – when.

Ideally you want to create a backup of your content when no one is using it. Here’s part of why. If you start a backup at 8:00 am on a Monday morning, you are probably going to have some unhappy users. When you start a granular backup, whether with a 3rd party tool or with built-in tools like PowerShell or Central Administration, the first thing SharePoint does is lock the site collection that is targeted in your backup. (Full farm or database backups do not cause this behavior, but they have other limitations.) While the site collection is locked, users can’t do anything but look at the content in SharePoint. Anything that would change the contents of any list or library is stopped. Workflows don’t start. Users cannot update list items or create new documents. Everything grinds to a halt. Suddenly SharePoint becomes more like a stuffy museum instead of a petting zoo. (SharePoint was meant to be more like a petting zoo.) If this happens to your users they will probably not be happy. There is a way around this: you can opt to not lock sites when you create your backup. This presents another problem that Sean McDonough went into here.
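For reference, here is a sketch of both behaviors with the built-in cmdlet (the URL and paths are placeholders). By default Backup-SPSite sets the site collection read-only for the duration; the -NoSiteLock switch opts out of that lock, with the consistency risk Sean describes.

```powershell
# Default behavior: the site collection is locked (read-only) while
# the backup runs. Names below are hypothetical.
Backup-SPSite -Identity "http://portal/sites/finance" `
    -Path "\\BackupServer\SPBackups\finance.bak"

# Opt out of the lock: users keep working, but content changed
# mid-backup can leave the backup file in an inconsistent state.
Backup-SPSite -Identity "http://portal/sites/finance" `
    -Path "\\BackupServer\SPBackups\finance_nolock.bak" -NoSiteLock
```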

So how do you know when to back up your SharePoint content? You can assume no one will be using your SharePoint site and just create backups over the weekend, but this is hit or miss, and you might be impacting more of your users than you originally thought. This is why diagnostic and performance monitoring tools for SharePoint exist.

Think of diagnostic tools for SharePoint like the OBD-II sensors in your car. Your car has a myriad of sensors to monitor its performance: fuel sensors, air sensors, and so on. When something is amiss, your car will let you know with a little light on your dashboard. It may not tell you what is wrong, just that something is wrong. When you take the car to a mechanic, however, they can pull a code and know exactly what is wrong and what needs to be repaired. Without a diagnostic tool like this for SharePoint, you may know that something is wrong but not be able to determine what. A good diagnostic tool will also let you know that something is going to go wrong before it does, so that you can address it before your end users notice.

So how does a diagnostic tool fit into a DR strategy? You need to know when your SharePoint farm will suffer the least from performing a backup, and when your backups will perform the best.  You don’t want to do a backup when your server is under a high load and there is a lot of traffic on it. It’s not a good idea to guess when an ideal time to perform a backup is. By using tools like performance monitors you can know for sure what kind of impact your backup will have on your end users.

Something else you need to keep in mind about when you back up is how long your backup will take. You need to know how long your backup will take to complete so that you can fit it within the window that your diagnostic tool has helped you determine. The diagnostic tool not only tells you when it is okay to start your backup, but when it should finish by. Is this really an issue? Absolutely. When I was doing quality assurance testing for Idera’s SharePoint Backup, some of the tests I would perform took days. Not minutes or hours, but days to complete. (This was of course under a very rare and unusual circumstance in a unique environment that you likely don’t have, but it is worth noting.) Starting a backup on Friday afternoon that locks up your entire SharePoint farm and doesn’t complete until sometime Wednesday morning is not going to make your end users very happy. It is also a good idea to use the performance monitoring tool to see how taxing a backup is on your farm, and to constantly adjust your DR strategy around it. SharePoint is not a static environment. Your end users’ habits change. You need to stay up to speed with what their needs are so that you can accommodate them and work around them.
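One low-tech way to get a baseline for that window is simply to time a backup run, for example with PowerShell’s built-in Measure-Command (the URL and path below are placeholders):

```powershell
# Time how long a site-collection backup actually takes, so it can be
# scheduled inside a known maintenance window. Names are hypothetical.
$duration = Measure-Command {
    Backup-SPSite -Identity "http://portal/sites/finance" `
        -Path "\\BackupServer\SPBackups\finance.bak"
}
Write-Output ("Backup took {0:N1} minutes" -f $duration.TotalMinutes)
```

This only measures one run under one load; a proper performance monitoring tool will tell you how that duration shifts as farm traffic changes.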

Fortunately most of the companies that sell DR products for SharePoint also sell performance monitoring tools for SharePoint. There is a reason that those tools exist, and this is one of them.

March 20, 2013

Target Discrimination – What do you back up?

Filed under: Disaster Recovery, SharePoint — slingeronline @ 12:28 pm

For those that don’t know me, I am an avid firearms enthusiast: handguns, rifles, etc. One of the things that responsible shooters practice is target discrimination. Target discrimination is the delicate art of hitting exactly what you want to hit, and missing exactly what you want to miss. So how does this relate to disaster recovery? All too often, there is a corporate policy for disaster recovery that I call “carpet bombing.” Everything is backed up. Everything. Hard drive snapshots are created every 6 hours. A full system image is created every night. Information that has been sitting dormant for 4 years, 7 months, and 13 days gets backed up every night, making a new copy of the same information. Even if a backup system is advanced enough to do differential or incremental backups, an index of this unchanged file is created every time.

Perhaps it would be better to imagine this in terms of physical files, and not computer files. You have a file cabinet that is full of thousands of documents. Every night, it is the responsibility of one person to photocopy every single file, and then return them to their respective file drawers. In the case of incremental and differential backups, the file itself is not copied, but the index card that says where the file is, what is in it, and when it was last changed, among other identifying features, is copied. If the change is more recent than whatever the disaster recovery policy says, the actual file is found and added to the xerox pile. If the company policy is that no photocopies older than 7 days are allowed to be used, every 7 days every single file gets dragged out of the cabinet and photocopied again.

This is where target discrimination comes in. Some files do change, and some don’t. If the file folder for the 10 year lease on the building doesn’t ever change, why would we drag it out and create a backup of it every week? It is apparent that it doesn’t make a lot of sense in a paper world, but we do it in the computer world daily and think nothing of it.  We drag entire hard drives over to the photocopier and make a copy of the whole mess, whether they need it or not. We end up with 137 copies of the exact same file, spread across backups, taking up valuable storage space, and potentially creating a new kind of disaster.

In the paper world, there is one backup. Maybe several versions of the same file, but not many. Actual physical space was much more expensive than computer storage space, so a policy was usually created to keep only so many backup copies of a file. After a certain amount of time, they were shredded. Here is an example. Go to your file cabinet, and pull out your tax forms from 1997. You probably don’t have them anymore. There is no need for them, so they got pitched, shredded, or burned in the barbecue pit. Now pull out your tax forms from last year. You still have those. You may not need them, but you have them just in case. (You should, anyway.) Now, do you make a backup copy of these every week? Kind of silly to do so, isn’t it? If they were computer files, though, then for some reason it makes perfect sense. This is what target discrimination is. Only back up what needs to be backed up. Discriminate among the targets of your disaster recovery policy. Be selective about what gets backed up, and how often. Imagine that you have to create a physical printout of every file to get a backup, and then determine what is “mission critical” to be backed up, and what hasn’t changed since 1997.

Just like with physical paper files, it is okay to not create a backup of a file that has not changed since the last backup. Create backups of the important stuff, but let go of the theory of “we have to back up everything, every time, just in case, just to be sure!” When you do start discriminating your targets, you will notice something: your backups will take much less time to execute, and the storage space allocated for backups will shrink.

So how do you decide? Well, you are going to have to talk to your end users and discuss what their needs are, and once they tell you, add those needs to the documentation that governs how you manage your SharePoint farm. (I think it’s called “governance”?) Different groups in your organization are going to have different needs. Just like in your household, different aspects of paperwork have different needs. You don’t need to keep a copy of a school permission slip around for the same amount of time that you need to keep a copy of your mortgage. You probably don’t even need a backup copy of the permission slip, but you should probably have several backup copies of other vital documentation, such as your mortgage. You don’t need to keep a holiday newsletter from five years ago, but you probably should keep the financial records from five years ago handy.

A good way to determine how to discriminate your disaster recovery targets is to imagine that the files actually are physical paper. Whatever the actual physical-paper policy would be, it should be relatively simple to translate into computer disaster recovery terms. If a file changes only once a year, you really shouldn’t feel the need to back it up once a week.
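At the site-collection level, one way to sketch this kind of discrimination in PowerShell is to back up only site collections whose content has actually changed recently, using the SPSite object’s LastContentModifiedDate property. The 7-day threshold, URL handling, and backup path below are all illustrative, not a recommendation.

```powershell
# Sketch: back up only site collections with content changes in the
# last 7 days. Threshold and paths are hypothetical.
$cutoff = (Get-Date).AddDays(-7)
Get-SPSite -Limit All |
    Where-Object { $_.LastContentModifiedDate -gt $cutoff } |
    ForEach-Object {
        # Turn the URL into a safe file name, e.g. http_portal_sites_finance.bak
        $file = ($_.Url -replace '[:/]+', '_') + ".bak"
        Backup-SPSite -Identity $_.Url `
            -Path "\\BackupServer\SPBackups\$file"
    }
```

Dormant site collections simply never match the filter, so they stop being photocopied every night.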

So, instead of “carpet bombing” your computer assets with a disaster recovery strategy that simply says “make backups,” a little bit of thought into which targets need to be backed up and how often could save large amounts of storage space and system resources. After all, you don’t need 137 copies of your teenage child’s 3rd grade Christmas concert program; one copy will suffice.
