Well, I did it. First one under the belt. I was chosen as a presenter for SharePoint Saturday Dallas (SPSDFW) and presented my session this past weekend. I had a blast, was a bundle of nerves, and overlooked some of the stuff in my notes. (I need to remember to keep to those a little closer next time.) My session wasn’t a sellout, or standing room only, but that’s OK. It was a great learning experience, and my next presentation will be better. I will be submitting for SPSSA (San Antonio)/SPSAustin, and SPSHOU (Houston) this coming year, just to round out the Texas SharePoint Saturdays. I might throw Houston’s Techfest in there also, not sure. (Still contemplating HSPUG) In any case, I do have a slide deck to post here, and I will be posting a blog article about the main theme of my presentation in the next few weeks. In the meantime… slides.
November 4, 2013
August 30, 2013
For those of us who talk about SharePoint Disaster Recovery, we often mention that any Disaster Recovery strategy needs to be part of an overall SharePoint Governance plan, but I don’t think any of us have gone into much detail about what that governance plan should include. If any bloggers have, please reference their posts in addition to this one. In this post I am going to show the disaster recovery portion of a governance plan that I am trying to implement at our organization. Please keep in mind that this is part of a much larger, detailed governance plan for a SharePoint environment.
July 22, 2013
I’ve not worked at a lot of companies that use SharePoint in my SharePoint career; I’ve worked at three. At all three, SharePoint was implemented as an afterthought, in most cases by someone who didn’t know anything about SharePoint or what it could do when it was initially set up. Implemented isn’t the right word. Managed doesn’t seem to fit either. Whatever the person who tells the person who implements SharePoint does, that’s the word we’re looking for. That person is one of the problems with adoption in SharePoint. The one who says “go make me a SharePoint.” That person, at all three companies I have worked for, has done one thing horrendously wrong in their implementation of SharePoint: they have left the ideas of content types and views behind. They are probably the 2nd and 3rd most powerful features of SharePoint, and they are completely ignored. Instead what you have is users who try very diligently to recreate their network shared folder structure in document libraries. At engineering companies, this is disastrous. I have seen document libraries with 150 folders of varying levels of hierarchy, and 3 documents.
This is how that happened. The project manager, sponsor, or whoever is “in charge” of implementing SharePoint has the idea that SharePoint is this large and scary tool that users are going to need to get used to. For someone who has never used SharePoint, they are absolutely correct. They will then bring up the concept of “crawl, walk, run.” At first they want to duplicate the network shared folders in document libraries so that users get used to SharePoint. For my first installation, I agreed that it would be best. I know better now. The last thing you want to do is allow users to bring their bad habits from the network shared folders into SharePoint. You’ll end up with SharePoint libraries containing 150 folders that are nearly empty. At these organizations, SharePoint has grown “organically,” which basically means “we had no idea what we were doing, so there was no plan in place for adoption, governance, backup, etc.”
SharePoint is a tool that is large and scary, and I understand how users can be intimidated by it. As I was working with SharePoint, I had an “a-ha” moment, where I suddenly “got it.” From that point on, I wanted SharePoint to be a part of everything I worked with. I wanted the “My Documents” folder on my home computer to allow me to set custom metadata columns so that I could treat it like a document library, with different views. And I wanted other people to see how brilliant the idea of views was.
I attempted to show a user how genius views were, and I thought I was getting somewhere until I heard this: “So what folder would I find that view in?” Phooey! The user had brought their idea of how information should be “structured” with them from the world of the network folder and had infected SharePoint with it. Maybe I would have better luck showing how views work on a list. I showed a different user how to create a view of a list based on sorting, filtering, grouping, and so on, so that they wouldn’t have to go to the list and then select “filter” and “sort” values from the column headers every time. Again, I thought I was making progress, I thought I was getting through to the user, and that they might have almost gotten the idea. “Wouldn’t this be easier to do in Excel?”
Users would embrace SharePoint, and become evangelists for SharePoint in their organizations, if they would just get out of their own way. And it’s not their fault that they have this limited idea of what SharePoint can do, instead treating it like a glorified FTP site. What we get from users, however, are gems like this: “SharePoint was not designed as a collaboration tool.” (Yes, I actually heard that from a user, at the first company where I worked as a SharePoint Admin.) It is our fault. SharePoint Administrators, Architects, Developers, Consultants, Analysts, and so on: it is entirely our fault. We have failed our end users. We allowed them to try to crawl. We encouraged them to walk. We pushed them to run. What we should have done was shove them off of a cliff so that they had no choice but to dive headlong into SharePoint, flailing and kicking and screaming, until they learned that we had actually given them wings, and they can fly. Imagine what the adoption rate of SharePoint would be if every user that used it “got it” because they didn’t have a choice.
July 19, 2013
The SharePoint community is huge, and somehow all of us are friendly and sociable with each other, even if we work for rival companies. Many of us in the SharePoint community also have blogs where we share our experiences and try to help each other out. SharePoint may not be as complex as a Turbo Encabulator, but it is still pretty complex. There is one thing that drives the SharePoint bloggers nuts though, and that is content thieves.
I’m not an MVP. I haven’t written a book. I’m not special or a superhero in the SharePoint community, and I have a SharePoint blog. (The one you happen to be reading right now, as a matter of fact.) There is one thing that I have not done in my blog, and never will: I have never copied someone else’s content as my own. I may reference other SharePoint blogs in my posts from time to time, but I always give credit where credit is due. If you have a SharePoint blog where you share your experiences, issues, problems, solutions, recommendations, and advice: good for you! We can all benefit from your knowledge. If you have a SharePoint blog that does not have a shred of your own content, but is instead a conglomeration of articles from other people’s SharePoint blogs, chances are you are not well liked in the community. If you work with SharePoint and cannot come up with your own content for a blog, then perhaps you either don’t actually work with SharePoint, or at the very least you shouldn’t have a blog about it. I started my SharePoint blog based on my experience with the maddening frustration of working with SharePoint 2007 and AutoCAD. When I started working at an ISV, I still ran into some issues and blogged about those. After I left the ISV and became a SharePoint Admin again out in the real world, I wrote about some of what I learned while at the ISV. It may be knowledge gained from other people in my experience, but it is strictly my interpretation. I’m not ashamed to say that I learned what I know from someone else.
In short, if you can’t come up with your own content for your blog and have to copy everyone else’s work, then KNOCK IT OFF! (It might not hurt for you to go back and look at some of your old blog posts either. When the SharePoint bloggers find out about it, they have a habit of changing the images that you shamelessly copied, URL and all, to images that you probably don’t want associated with you. This is also bandwidth theft, you moron.)
So basically, to sum it up: If you have a SharePoint blog, rock on! You are part of what makes the SharePoint community great! If you have a SharePoint blog that is all someone else’s content, you suck, and STFU!*
* STFU does not stand for SharePoint Technical Framework University, yet.
June 28, 2013
A while back a user I follow on Twitter, Wendy Neal (@SharePointWendy), asked a question about who is responsible for backing up SharePoint. Not something so existential as it needing to be “Dave, from Accounting,” but more of a roles-type discussion. Is it the SharePoint Administrator’s role, or the server admin’s? She, Sean McDonough (@SPMcdonough), and I had a brief little discussion about it. Granted, it was mostly Sean and Wendy, because Sean is Sean (one of the authors of the SharePoint Disaster Recovery Guides). It does bear repeating though, since the question was asked, and it fits nicely into my series of blog posts on SharePoint Disaster Recovery.
So who is responsible? If you don’t know this, then it is time to start asking some serious questions of the department that owns your company’s SharePoint installation. The person responsible might turn out to be you. It also might be “no one,” which loosely translates into “you.” The official answer is, of course, “It depends.” What aspect of your disaster recovery plan are you trying to cover? In most larger organizations, the responsibility will be shared among several individuals:
- Hardware – Hardware should be the responsibility of the server administrators. This would be in the case of total catastrophic failures, so that you can restore from bare metal.
- Databases – SharePoint runs on many databases. While digging into them directly is not recommended, or supported, entire databases can be backed up and restored with no ill effects other than some minor restore pains.
- Content – SharePoint content pretty much falls to the SharePoint administrator. Granted, cooperation with the server and database administrators is a necessity, but ultimately, if a user cannot find or access their content, or something goes wrong with SharePoint, the end users are going to look to the SharePoint Administrator both to blame and to solve their problems.
If you are a SharePoint Administrator, congratulations!
Short version: You are now responsible for ensuring content is safe and secure for your end users. Fortunately, it isn’t that difficult to do, and I hope my last several blog posts have helped.
Long version: For any world-class disaster recovery scenario to work, all of the administrators need to work together to ensure that your disaster recovery plan can account for any level of disaster that may strike your farm. If your database administrator has set up mirroring as a disaster recovery and/or high availability strategy, you need to make sure that your SharePoint farm is aware of it and configured appropriately. Your database administrator also needs to know that some SharePoint databases don’t do mirroring well, if you have those particular service applications in your farm. Your server administrator needs to understand that SharePoint isn’t simply a piece of software that resides on a single machine, but rather a complex combination of several servers that make up the whole farm. Not all Web Front Ends are the same, and your application servers and WFEs are not interchangeable for DR purposes. Taking a snapshot of one WFE in a farm that has ten probably isn’t going to help much after a disaster strikes, simply because of the subtle differences between them. Granted, a WFE is probably easier to restore to a farm than an application server, but your server administrator needs to know this. There is a good chance that the server admin and the DB admin aren’t certain of the magic that makes a SharePoint farm work smoothly. They may want to make changes that Microsoft has said are a bad idea, which, incidentally, is a bad idea. As a SharePoint Admin, it is your responsibility to make sure that they are aware of this.

So why Milton? Sometimes getting a server admin or a DB admin to understand what you are trying to tell them is difficult, especially with a technology as potentially complex as SharePoint. There are times when you might feel like Milton from the movie Office Space, and no one is listening to you. Do what you can with the tools you have, such as Central Administration, until you can get what you need.
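On the mirroring point, for example, SharePoint has to be explicitly told about a content database’s mirror so the farm knows where to fail over. Here is a minimal sketch, assuming SQL mirroring is already configured and using hypothetical database and server names; run it from the SharePoint Management Shell:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Hypothetical database name; point this at your own content database.
$db = Get-SPDatabase | Where-Object { $_.Name -eq "WSS_Content" }

# Register the mirror (failover partner) with SharePoint so the farm
# knows where to look if the principal SQL instance goes down.
$db.AddFailoverServiceInstance("SQLMIRROR\SHAREPOINT")
$db.Update()
```

This only makes SharePoint aware of the mirror; the mirroring itself is still the database administrator’s job, which is exactly why the two of you have to coordinate.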
You might also notice that when I laid out the different areas of a DR strategy, I used the word “should” a lot. Just because a particular admin “should” do something, doesn’t mean that they actually are. It is probably not a bad idea for you to ensure that these things are happening.
May 14, 2013
If you’ve been playing along, I’ve talked about what, when, where, and why you should create backups. Once you have a DR plan that includes what you need to back up, how often, and where it is stored, with plenty of documentation, we can finally get down to what tools you should use to perform your backups. (If you bought the tools first, and then planned your strategy around what the tools could or could not do, please throw away your DR plan and start over.) There are a myriad of tools out there to back up and restore content in SharePoint specifically, as well as on your servers in general. Keep this in mind though: just because a tool can be used in a certain way doesn’t always mean it should be. Don’t compromise your DR strategy because a tool doesn’t cover it. Find the tool to fit your needs; don’t change your needs to fit the tool. Look at the “toolkit” pictured below: what happens if you need a Phillips-head screwdriver? You obviously don’t have the right tool for that.
So what tools should you use? Well, if you have planned out your DR strategy in great detail, and included specific items that you absolutely must create backups of, and what you can reasonably get away with not having backed up, the choice should become fairly obvious: get the least expensive tool that meets all of your requirements. Do you need all of your SPD workflows backed up? What about workflow history? Do you need versions of your content backed up? Permissions? Customizations? There is a lot to consider. Your environment may have some stuff in it that some backup tools don’t handle very well, or even at all. If you decide that using the built-in Central Administration tool is all you need, great. I hope you don’t have any custom web.config files. And you might find yourself installing all of your 3rd party features again after a disaster strikes. It would probably be better to get a tool that covers those cases. You may also want to give certain users the ability to restore their own content, depending on your environment. It all depends on what your DR strategy is. These are just a few examples of the thousands of things that need to be taken into consideration. Now that you have this information, it is probably a good idea to go back and look at your DR strategy again. If you see any reference to a specific piece of software in your DR plan, you did something wrong. Your DR plan should be as generic as possible. Got that? Your DR policy needs to be “specifically generic.” Be specific about the environment and what needs to be backed up, but generic about the tools you use to get there. Once you have everything in your DR strategy figured out, hand off deciding which software to buy to someone else if you can. (It’s the easiest way to prevent bias.) To get you started, here are some of the most popular 3rd party disaster recovery tools available, in no particular order.
Each one has strengths and weaknesses. If your DR policy needs a 3rd party tool, make sure that what you get meets all of your requirements. Whatever you choose, test it to make sure it meets those requirements. If you find after testing that you are running into some roadblocks, read this.
May 3, 2013
In the past few blog posts, I have gone over the What, When, and Where of SharePoint Disaster Recovery planning. So, why? Why do you need any type of Disaster Recovery plan? You have a farm that has ten web front ends that are all load balanced and set for automatic failover. You have two app servers with mirroring set up between them, as well as two separate servers dedicated solely to Search, also mirrored. You have a clustered database server set for automatic failover. Your high security/high availability setup is second only to the one at Microsoft HQ. The reason that you need a Disaster Recovery strategy is that disasters will strike even the most robust of environments. Besides, it is always better to have it and not need it than to need it and not have it. Murphy’s Laws being what they are, simply having a good disaster recovery strategy will probably prevent you from needing it.
So what kind of disasters could befall such a glorious and robust SharePoint farm that would bring it to its knees and make Administrators, Architects, and Developers weep? Two words: “End Users.” End users are notorious for finding new and creative ways to make things not work correctly. End users are famous for accidentally obliterating months-old objects that they discover were needed mere seconds after they were irreparably removed or damaged. So much so that I have come up with a name for the kinds of chaos and havoc that End Users inflict on SharePoint environments: “Tiny Disasters.” Just like a toddler sticking a PB&J in your Blu-Ray player, End Users are the worst enemy of healthy SharePoint environments. They are also the reason it exists, so we can’t realistically say “Out of the pool!” (Wouldn’t that be nice though?) There are ways that you can try to minimize the damage that End Users will inflict, but there are always exceptions. To minimize the damage that users can unleash upon your SharePoint farm, a good set of governance is a brilliant place to start. Limit who can do what. If you have not yet created a custom permission level that is CRU (Create, Read, Update, without the Delete), what are you waiting for? Not every contributing user needs to be able to delete. You also need to make sure that only the stuff that is supposed to go in SharePoint does. You don’t really need 1,300 copies of Bruno Mars MP3s stored in your farm. Ultimately, be ready for the end user who manages to upload something that causes damage. If you aren’t prepared for it, it will happen.
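As a sketch of how that CRU permission level might look on premises (the site URL and level name here are hypothetical), you can clone the built-in Contribute role definition from the SharePoint Management Shell and mask off the delete rights:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Hypothetical site; point this at your own web.
$web = Get-SPWeb "http://sharepoint/sites/engineering"

# Copy the built-in Contribute role definition...
$contribute = $web.RoleDefinitions["Contribute"]
$cru = New-Object Microsoft.SharePoint.SPRoleDefinition($contribute)
$cru.Name = "Contribute (No Delete)"
$cru.Description = "Add, view, and update items, but never delete them."

# ...then strip out the delete permissions.
$deleteBits = [Microsoft.SharePoint.SPBasePermissions]::DeleteListItems -bor `
              [Microsoft.SharePoint.SPBasePermissions]::DeleteVersions
$cru.BasePermissions = $contribute.BasePermissions -band (-bnot $deleteBits)

$web.RoleDefinitions.Add($cru)
$web.Dispose()
```

Assign the new level to your contributing users instead of Contribute, and the “oops, I deleted the whole library” class of Tiny Disaster gets a lot rarer.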
One of the most powerful tools that your end users have is the Content Query Web Part. Unfortunately, it is powerful enough to really mess things up. Bad code, whether it was developed in house or came from a third party, can also exist in your farm. Yes, even ISVs will write code that may not play nice with your farm. Maybe not directly, but ISVs are not known for testing compatibility with their competitors’ products. I know that last bit from personal experience: I worked at an ISV, and when trying to test compatibility with another ISV’s product, there was some friction.
How do you mitigate this unending flow of terror that is constantly unleashed upon your precious environment? Have backups. Have a Disaster Recovery strategy in place. Test it. Test it often. Just because it worked once doesn’t mean it works now. The easier it is to recover from a Tiny Disaster, the more the stress of having them fades from major paranoia to mild inconvenience. Practice with Tiny Disasters also makes it much easier to manage the catastrophes. You will already know your recovery plan and how to be most effective with it. Knowing you can recover from anything your End Users throw at your environment will help you sleep easier at night.
Ultimately the best reason to get backups of your SharePoint farm, is so you won’t need them.
April 2, 2013
If you have been following along the past few posts, then you have a pretty good idea what you need to back up, and when you need to back it up. So once you have a governance policy in place that says what you need to back up, and when your backups need to run, you are set to go, right? Sure, but where are you going to keep the backup files? I would like to say that this is a “no-brainer,” but I have seen it many times where the backup files were stored on the very server that was backed up. I will admit that I am guilty of it as well, although under unique circumstances. (Those circumstances being that it was a testing environment and I didn’t really care if the entire farm went belly up; I was in a position to lose exactly no critical data at all.) So where do you keep those files? When deciding where to keep your backup files, there are several things to consider; you will need to weigh the importance of each aspect to decide which option is best for your organization.
Backup Latency is how quickly a backup file can be stored to the selected location. Selecting a local drive to store the backup file will be extremely fast, while selecting to store your backup in the cloud will likely be significantly slower.
While storing a backup file locally is very quick, it is not very secure. If something happens to your server that makes it inaccessible, chances are your backup file is also inaccessible, which means it is useless.
Restore Latency is how fast you can get the information out of your backup and into your environment. Although there is a close relation between Backup Latency and Restore Latency, they are not identical. Often you do not need to restore the entire contents of your backup, but rather a small portion of it. (I plan to address this in more detail in a future post.)
So where are the best places to store backup files? Below is a table* that lists some common locations and how they rank in my experience. Keep in mind that your mileage may vary.
| Storage | Backup Latency | Safety | Restore Latency |
| --- | --- | --- | --- |
| Portable Hard Drive | Medium | Low | High |
Many people will wonder why I said Tape only has medium safety. It can be very safe, if it is stored offsite. If you keep your tapes next to your server, it isn’t much better than a shared drive in terms of safety. (Which makes it one of the worst options to go with, unless you have a strict policy around tape management that will prevent the tapes from being stacked on top of the server rack.) The same can be said of the removable hard drive. Typically they are left plugged into the server and just stacked on top of the rack. If there is a strictly enforced policy that says that the portable drive must be kept offsite unless it is being actively used in the backup or restore process, its safety rating goes up. The reason that this is important is that not every disaster is related to software or hardware failure. Sometimes environmental conditions are the cause of your disaster. If your server room floods and all of your backups are kept in the same room with your server, you didn’t really have a backup after all, did you? You can tell that there are going to be other cost considerations as well. Convenience is one, and the actual financial cost should always be a consideration. I’ve added those to the table* below.
| Storage | Backup Latency | Safety | Restore Latency | Convenience | Cost |
| --- | --- | --- | --- | --- | --- |
| Portable Hard Drive | Medium | Low | High | Medium | Medium |
You can tell that there isn’t really a best, “one size fits all” solution. Each has its advantages and disadvantages. It truly depends on what your needs are and what your organization can afford. Please note that there also is no real correlation between cost, safety, convenience, etc. I should also note that with Tape and Portable Hard Drives you can increase the safety, but it will also decrease the relative convenience. Keep all of this in mind and be sure to include it in your disaster recovery planning. Cost can also increase with the amount of storage space required. What do you want to keep in your backups? If you keep your SharePoint backups on the local drive and then, through a different disaster recovery policy, back up the entire server to the cloud, you just backed up your backup, and that uses storage space. Your backup will take a little longer and use up a little more storage space, and that will eventually add up. Instead of backing up SharePoint to the local server and then backing up the local server, backup included, to the cloud, it probably makes more sense to back up SharePoint to the final location in the first place. This also means that in the event of a disaster you don’t have to restore the backup from the restored backup. (If we keep this up we will end up in a cyclic redundancy loop, like having Doritos Locos Tacos flavored Doritos. Are we next going to have some Doritos Locos Tacos flavored Doritos Locos Tacos?)
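If your server-level job does have to copy a volume that contains SharePoint backups, one way to avoid the backup-of-a-backup problem is to exclude that folder from the copy. A sketch with hypothetical paths, using robocopy’s directory-exclusion switch:

```powershell
# Hypothetical paths. Mirror the data volume to the offsite staging share,
# but exclude (/XD) the folder that already holds the SharePoint backups,
# so the backup itself is not backed up a second time.
robocopy "D:\" "\\backupgateway\servers\SP-WFE01" /MIR /XD "D:\SPBackups"
```

Whatever backup software you actually use, it almost certainly has an equivalent exclusion setting; the point is to use it.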
*The values in the tables are not from any official source, but just from my personal experience. Your experience may vary greatly from mine. This is only meant as a guide to what you should consider when choosing a location to store your backup files.
March 29, 2013
In my last post I spoke about what needed to be backed up. In this one we will address something else that you need to consider in your disaster recovery plan – when.
Ideally you want to create a backup of your content when no one is using it. Here’s part of why. If you start a backup at 8:00 am on a Monday morning, you are probably going to have some unhappy users. When you start a granular backup, using a 3rd party tool or the built-in tools like PowerShell or Central Administration, the first thing that SharePoint does is lock the Site Collection that is targeted in your backup. (Full farm or database backups do not cause this behavior, but they have other limitations.) While the Site Collection is locked, users can’t do anything but look at the content in SharePoint. Anything that would change the contents of any list or library is stopped. Workflows don’t start. Users cannot update list items or create new documents. Everything grinds to a halt. Suddenly SharePoint becomes more like a stuffy museum than a petting zoo. (SharePoint was meant to be more like a petting zoo.) If this happens to your users, they will probably not be happy. There is a way around this: you can opt not to lock sites when you create your backup. This presents another problem that Sean McDonough went into here.
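To make that concrete with the built-in cmdlets (a sketch, using a hypothetical site URL and backup path): the lock is the default behavior of Backup-SPSite, and skipping it is an explicit choice.

```powershell
# Run from the SharePoint Management Shell. By default, Backup-SPSite sets
# the site collection to read-only for the duration of the backup.
Backup-SPSite -Identity "http://sharepoint/sites/engineering" `
              -Path "E:\SPBackups\engineering.bak"

# -NoSiteLock leaves the site writable during the backup, at the risk of
# the mid-backup inconsistencies Sean describes. -Force overwrites an
# existing backup file at that path.
Backup-SPSite -Identity "http://sharepoint/sites/engineering" `
              -Path "E:\SPBackups\engineering.bak" -NoSiteLock -Force
```

Either way, scheduling the backup for a quiet window is the real fix; the switch just changes which problem you have when you can’t.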
So how do you know when to back up your SharePoint content? You can guess when no one will be using your SharePoint site and just create backups over the weekend, but this is hit or miss, and you might be impacting more of your users than you originally thought. This is why they make diagnostic and performance monitoring tools for SharePoint.
Think of diagnostic tools for SharePoint like the OBD-II sensors in your car. Your car has a myriad of sensors to monitor its performance: fuel sensors, air sensors, and so on. When something is amiss, your car will let you know with a little light on your dashboard. It may not tell you what is wrong, just that something is wrong. When you take the car to a mechanic, however, they can pull a code and know exactly what is wrong and what needs to be repaired. Without a diagnostic tool like this for SharePoint, you may know that something is wrong, but not be able to determine what. A diagnostic tool will also let you know if something is going to go wrong before it does, so that you can address it before your end users notice.
So how does a diagnostic tool fit into a DR strategy? You need to know when your SharePoint farm will suffer the least from performing a backup, and when your backups will perform the best. You don’t want to do a backup when your server is under a high load and there is a lot of traffic on it. It’s not a good idea to guess when an ideal time to perform a backup is. By using tools like performance monitors you can know for sure what kind of impact your backup will have on your end users.
Something else you need to keep in mind about when you back up is how long your backup will take. You need to know how long your backup will take to complete so that you can fit it within the window that your diagnostic tool has identified. The diagnostic tool not only tells you when it is okay to start your backup, but when it should finish by. Is this really an issue? Absolutely. When I was doing quality assurance testing for Idera’s SharePoint Backup, some of the tests I would perform took days. Not minutes or hours, but days to complete. (This was of course under a very rare and unusual circumstance in a unique environment that you likely don’t have, but it is worth noting.) Starting a backup on Friday afternoon that locks up your entire SharePoint farm and doesn’t complete until sometime Wednesday morning is not going to make your end users very happy. It is also a good idea to use the performance monitoring tool to see how taxing a backup is on your farm, and to constantly adjust your DR strategy around it. SharePoint is not a static environment. Your end users’ habits change. You need to stay up to speed with what their needs are so that you can accommodate them and work around them.
Fortunately most of the companies that sell DR products for SharePoint also sell performance monitoring tools for SharePoint. There is a reason that those tools exist, and this is one of them.
March 20, 2013
For those that don’t know me, I am an avid firearms enthusiast. Handguns, Rifles, etc. One of the things that responsible shooters practice is target discrimination. Target Discrimination is the delicate art of hitting exactly what you want to hit, and missing exactly what you want to miss. So how does this relate to Disaster Recovery? All too often, there is a corporate policy for Disaster Recovery that I call “Carpet Bombing.” Everything is backed up. Everything. Hard drive snapshots are created every 6 hours. A full system image is created every night. Information that has been sitting dormant for 4 years, 7 months, and 13 days, gets backed up every night, making a new copy of the same information. Even if a backup system is advanced enough to do differential or incremental backups, an index of this unchanged file is created every time.
Perhaps it would be better to imagine this in terms of physical files, and not computer files. You have a file cabinet that is full of thousands of documents. Every night, it is the responsibility of one person to photocopy every single file, and then return them to their respective file drawers. In the case of incremental and differential backups, the file itself is not copied, but the index card that says where the file is, what is in it, and when it was last changed, among other identifying features, is copied. If the change was more recent than whatever the disaster recovery policy says, the actual file is found and added to the xerox pile. If the company policy is that no photocopies older than 7 days are allowed to be used, every 7 days every single file gets dragged out of the cabinet and photocopied again.
This is where target discrimination comes in. Some files do change, and some don’t. If the file folder for the 10 year lease on the building doesn’t ever change, why would we drag it out and create a backup of it every week? It is apparent that it doesn’t make a lot of sense in a paper world, but we do it in the computer world daily and think nothing of it. We drag entire hard drives over to the photocopier and make a copy of the whole mess, whether they need it or not. We end up with 137 copies of the exact same file, spread across backups, taking up valuable storage space, and potentially creating a new kind of disaster.
In the paper world, there is one backup. Maybe several versions of the same file, but not many. Actual physical space was much more expensive than computer storage space, so a policy was usually created to keep only so many backup copies of a file. After a certain amount of time, they were shredded. Here is an example. Go to your file cabinet, and pull out your tax forms from 1997. You probably don’t have them anymore. There is no need for them, so they got pitched, shredded, or burned in the barbecue pit. Now pull out your tax forms from last year. You still have those. You may not need them, but you have them just in case. (You should, anyway.) Now, do you make a backup copy of these every week? Kind of silly to do so, isn’t it? If they were computer files, though, then for some reason it makes perfect sense. This is what target discrimination is. Only back up what needs to be backed up. Discriminate the targets of your disaster recovery policy. Be selective about what gets backed up, and how often. Imagine that you have to create a physical printout of every file to get a backup, and then determine what is “mission critical” to be backed up, and what hasn’t changed since 1997.
Just like with physical paper files, it is okay to not create a backup of a file that has not changed since the last backup. Create backups of the important stuff, but let go of the theory of “we have to back up everything, every time, just in case, just to be sure!” When you do start discriminating your targets, you will notice something: your backups will take much less time to execute, and the storage space allocated for backups will shrink.
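In file terms, target discrimination is really just a date check before the copy. Here is a minimal sketch with hypothetical paths, copying only the files that have changed since the last run:

```powershell
# Hypothetical paths; the "last backup" marker here is just a date.
$source     = "D:\SPBackups"
$dest       = "\\backupserver\sharepoint"
$lastBackup = (Get-Date).AddDays(-7)   # e.g., the previous weekly run

Get-ChildItem -Path $source -Recurse -File |
    Where-Object { $_.LastWriteTime -gt $lastBackup } |
    ForEach-Object {
        # Preserve the folder structure under the destination.
        $target = Join-Path $dest $_.FullName.Substring($source.Length)
        New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
        Copy-Item $_.FullName $target -Force
    }
```

A real incremental backup tool does this with archive bits or change journals instead of timestamps, but the principle is the same: the 10-year lease never gets dragged to the photocopier.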
So how do you decide? Well, you are going to have to talk to your end users and discuss what their needs are, and once they tell you, add those needs to some documentation that will govern how you manage your SharePoint farm. (I think it’s called “governance”?) Different groups in your organization are going to have different needs. Just like in your household, different kinds of paperwork have different needs. You don’t need to keep a copy of a school permission slip around for the same amount of time that you need to keep a copy of your mortgage. You probably don’t even need a backup copy of the permission slip, but you should probably have several backup copies of other vital documentation, such as your mortgage. You don’t need to keep a holiday newsletter from five years ago, but you probably should keep the financial records from five years ago handy.
A good way to determine how to discriminate your disaster recovery targets is to imagine that the files actually are physical paper. Whatever the actual physical paper policy would be, it should be relatively simple to translate into computer disaster recovery terms. If the file doesn’t change but once a year, you really shouldn’t feel the need to back it up once a week.
So, instead of “carpet bombing” your computer assets with a disaster recovery strategy that simply says “make backups,” a little bit of thought into discriminating what targets need to be backed up and how often, could save large amounts of storage space and system resources. After all, you don’t need to have 137 copies of your teenage child’s 3rd grade Christmas concert program; one copy will suffice.