Midway through last year I had a client that was constantly running out of disk space on their Exchange 2007 SAN LUNs, which was causing problems on the Exchange servers. They would run out of space on the database and log drives, which would subsequently dismount the databases. To provide better reporting on the actual day-to-day changes, we wanted to report back the free space on each drive. To complicate things a bit, since most drives were already very low, I added the whitespace in the databases to the report to get the most accurate picture of what was actually free for Exchange to use. With this report we were able to avoid unnecessary mailbox moves when sufficient space was reported.
* The script has been linked here – Exchange2007-DiskSpace-Report – Rename the file to .ps1 to use it.
To create this script, I combined portions of my own scripts with scripts other people had created for parts of the process. In the end, what I wanted was a daily report that would combine the disk space and whitespace into one number for each database. The report would be emailed in HTML format in the body of the message, and also attached as an XLS spreadsheet. That last part would prove to be troublesome.
The first part of the script queries the WMI objects on the servers to get an accurate report of the disk space. [As a side note, the database and log drives were all set up as mount points on a root drive. All drives were on a NetApp SAN.]
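This first step can be sketched roughly as follows. The server name and output path are placeholders, not the values from the actual script. `Win32_Volume` is used rather than `Win32_LogicalDisk` because the database drives are NTFS mount points, which `Win32_LogicalDisk` does not report:

```powershell
# Hypothetical server name and report path for illustration
$server = "EXCH01"

# DriveType 3 = local disk; Win32_Volume also returns mounted volumes
Get-WmiObject -Class Win32_Volume -ComputerName $server |
    Where-Object { $_.DriveType -eq 3 -and $_.Capacity -gt 0 } |
    Select-Object @{n="Server";e={$server}},
                  Name,
                  @{n="CapacityGB";e={"{0:N2}" -f ($_.Capacity / 1GB)}},
                  @{n="FreeGB";e={"{0:N2}" -f ($_.FreeSpace / 1GB)}},
                  @{n="PercentFree";e={"{0:N2}" -f (($_.FreeSpace / $_.Capacity) * 100)}} |
    Export-Csv -Path "C:\Reports\$server-DiskSpace.csv" -NoTypeInformation
```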
The second step imports the CSV from the first step, removes extraneous drive information (e.g. the system drive, log drives, etc.), and creates a filtered file for the next step. The real issue with this step is that you have to know your server drives and what each drive is used for. This is not yet automatic.
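A minimal sketch of this filtering step; the excluded volume names are made-up examples, and in practice you would list your own system and log volumes here:

```powershell
# Hypothetical list of volumes to drop from the report (adjust per server)
$exclude = @("C:\", "D:\", "E:\ExchLogs\")

Import-Csv "C:\Reports\EXCH01-DiskSpace.csv" |
    Where-Object { $exclude -notcontains $_.Name } |
    Export-Csv "C:\Reports\EXCH01-DiskSpace-Filtered.csv" -NoTypeInformation
```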
Next, the script queries the event logs of the Exchange server for the past day, looking for 1221 events, which report the whitespace available in each database. As it runs, the reported database names are shortened to make processing easier. These results are written to a CSV file.
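The whitespace query can be sketched like this. The regex is an assumption based on the usual 1221 message text ("The database ... has N megabytes of free space after online defragmentation ..."), which can vary by service pack, and the name-shortening shown here (keeping only the part after the last backslash) is illustrative:

```powershell
# Pull the last day of Application log entries from a placeholder server
$events = Get-EventLog -LogName Application -ComputerName "EXCH01" `
              -After (Get-Date).AddDays(-1) |
          Where-Object { $_.EventID -eq 1221 }

$events | ForEach-Object {
    # Parse the database name and whitespace figure out of the message text
    if ($_.Message -match '"(?<db>[^"]+)" has (?<mb>\d+) megabytes') {
        New-Object PSObject -Property @{
            # Shorten "Storage Group\Mailbox Database" to just the database name
            Database     = ($Matches['db'] -split '\\')[-1]
            WhitespaceMB = [int]$Matches['mb']
        }
    }
} | Export-Csv "C:\Reports\EXCH01-Whitespace.csv" -NoTypeInformation
```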
Then we combine the files from steps two and three to make one file with all the free space added together. This creates CSV number four.
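A sketch of the combining step, assuming (as an illustration, not from the original script) that a database can be matched to its mount-point volume by name:

```powershell
$disk  = Import-Csv "C:\Reports\EXCH01-DiskSpace-Filtered.csv"
$white = Import-Csv "C:\Reports\EXCH01-Whitespace.csv"

$disk | ForEach-Object {
    $d = $_
    # Hypothetical match: the volume path contains the database name
    $match = $white | Where-Object { $d.Name -like "*$($_.Database)*" }
    $wsGB  = if ($match) { $match.WhitespaceMB / 1024 } else { 0 }
    New-Object PSObject -Property @{
        Server      = $d.Server
        Volume      = $d.Name
        FreeGB      = "{0:N2}" -f ([double]$d.FreeGB + $wsGB)  # disk free + whitespace
        PercentFree = $d.PercentFree
    }
} | Export-Csv "C:\Reports\EXCH01-Combined.csv" -NoTypeInformation
```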
Once these are combined, we need to sort by percentage to make the report more useful. The result is stored in the fifth CSV file. [‘sort the percentages’]
After these are sorted, the percentage column is converted to a properly formatted percentage. This is stored in the sixth CSV file. [‘convert percentage column to percentage’]
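These two passes can be sketched together in one pipeline. Casting the column to a number before sorting is an assumption on my part, but it matters: a plain string sort would put "9.00" after "85.00":

```powershell
Import-Csv "C:\Reports\EXCH01-Combined.csv" |
    Sort-Object { [double]$_.PercentFree } |                 # numeric, not string, sort
    Select-Object Server, Volume, FreeGB,
        @{n="PercentFree";e={"{0:P1}" -f ([double]$_.PercentFree / 100)}} |
    Export-Csv "C:\Reports\EXCH01-Sorted.csv" -NoTypeInformation
```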
We then convert the last file into an XLS spreadsheet, which was a trick to do when scheduling the script, as you have to leave the workstation logged in as the user running the script. It will not work any other way [confirmed by a Microsoft document on this].
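The conversion uses Excel automation, which is why the interactive session is needed: the Excel COM object will not start reliably in a non-interactive scheduled task. A sketch, with placeholder paths:

```powershell
$xlExcel8 = 56   # XlFileFormat constant for the legacy .xls format

$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false
$excel.DisplayAlerts = $false   # suppress overwrite prompts

$wb = $excel.Workbooks.Open("C:\Reports\EXCH01-Sorted.csv")
$wb.SaveAs("C:\Reports\EXCH01-Sorted.xls", $xlExcel8)
$wb.Close()
$excel.Quit()

# Release the COM object so EXCEL.EXE does not linger
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($excel) | Out-Null
```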
After all of those steps are completed, the same loop runs for the next server, and so on until all servers are done. Once all the server reports are finished, the final step is to prepare and send an email to everyone who requires the report.
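The mail step can be sketched with `System.Net.Mail` (which works on PowerShell v1, the Exchange 2007 era); the SMTP server and addresses are placeholders:

```powershell
$smtp = New-Object System.Net.Mail.SmtpClient("smtp.example.com")
$msg  = New-Object System.Net.Mail.MailMessage

$msg.From = "exchange-reports@example.com"
$msg.To.Add("ops-team@example.com")
$msg.Subject    = "Exchange Free Space Report - $(Get-Date -Format yyyy-MM-dd)"
$msg.IsBodyHtml = $true

# Render the final CSV as an HTML table for the message body
$msg.Body = Import-Csv "C:\Reports\EXCH01-Sorted.csv" | ConvertTo-Html | Out-String

# Attach the XLS version produced by the Excel step
$msg.Attachments.Add("C:\Reports\EXCH01-Sorted.xls")

$smtp.Send($msg)
$msg.Dispose()
```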
Finally, after all is said and done, the last part does the cleanup. Every file created is renamed with the current date appended and moved to a storage folder, just in case.
This last step is really not necessary and could cause a drive to fill up, but it is there for possible use if something in the script fails.
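The cleanup can be sketched as a date-stamp-and-move over the working folder (paths again are placeholders):

```powershell
$stamp   = Get-Date -Format "yyyyMMdd"
$archive = "C:\Reports\Archive"   # hypothetical storage folder

Get-ChildItem "C:\Reports\*.csv" | ForEach-Object {
    # Append the date to the base name, e.g. EXCH01-Sorted-20240101.csv
    $newName = "$($_.BaseName)-$stamp$($_.Extension)"
    Move-Item $_.FullName (Join-Path $archive $newName)
}
```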
Notes on the script
* This script is a bit inefficient at this time, as a lot of data is stored in CSV files. This was done for ease of use for a first-time coder like me, and it also leaves the data behind for portability.
* Absolutely no guarantee is given for the script, as it has only been fully vetted in one environment. It is provided for educational purposes, and hopefully someone besides me will find a use for it.
* Future improvements will include prebuilt CSV files and storing data in array variables, as well as looping the script over all Exchange servers (either gathered from a CSV or using PowerShell to create a pre-populated array).
* I have also created a script to handle log drives as log drives do not have whitespace to calculate. Look for a future post on this.
* Pauses were inserted to help with processing, as it takes time for the scripts to gather the data and place it into the required files. Total run time was 4-5 minutes because of the number of databases and the pauses.