Systems Engineer trapped on Earth...

The UltraDNS API and Powershell

At work we're currently using Neustar's UltraDNS service to host 200+ DNS records, and I started a project to automate changing IP addresses to switch to DR sites. There is a well-documented API for this, with great examples and solutions built mostly in Python, Perl, and Java. UltraDNS has published examples for all three of those on their Github page, and there is even a Perl module available on CPAN.

Since Powershell is my current lingua franca, I put together this rough writeup with some test code. I'll eventually shape things into a Powershell module that performs a few different functions, then tie it into a web interface to fully automate switching to DR IP addresses. The rest of my team is, of course, in a nerdgasmic state over being able to press a few buttons to accomplish this.

First, have a look at the REST API documentation, which gives you a quick overview. UltraDNS customers also have access to more in-depth documentation, including a full user guide. Some familiarity with REST web services, Powershell's built-in commands for them, and reading the API documentation is helpful.

The API uses tokens to authenticate. To use the service, you'll need to first build a call to get an access token. This token has to be passed on to subsequent calls. The code below will return the value of the accessToken property. It goes without saying that the credentials should be hidden in production.

In this example I’m using the test URI. This call to /token will return some other output. Remove the pipe to Select-Object to see it.
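As a sketch, the token request can look like the following. The test-endpoint URI and the credentials here are assumptions based on the UltraDNS REST documentation; substitute your own.

```powershell
# Request an access token from the UltraDNS test endpoint (assumed URI)
$cred = @{
    grant_type = "password"
    username   = "myuser"     # hypothetical credentials -- hide these in production
    password   = "mypass"
}
Invoke-RestMethod -Uri "https://test-restapi.ultradns.com/v1/authorization/token" `
    -Method Post -Body $cred |
    Select-Object -ExpandProperty accessToken
```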

Now I can get my access token any time with:
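A minimal wrapper, assuming the same test endpoint; the function name is hypothetical.

```powershell
# Hypothetical helper that returns a fresh UltraDNS access token (assumed test URI)
function Get-UDNSToken {
    param([string]$User, [string]$Pass)
    $body = @{ grant_type = "password"; username = $User; password = $Pass }
    (Invoke-RestMethod -Uri "https://test-restapi.ultradns.com/v1/authorization/token" `
        -Method Post -Body $body).accessToken
}

$token = Get-UDNSToken -User "myuser" -Pass "mypass"
```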

This code gets the A record(s) for a given domain:
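A sketch of that call, passing the token in a Bearer header. The rrsets endpoint shape is an assumption from the UltraDNS REST documentation, and the zone name is a placeholder.

```powershell
# Get the A record rrsets for a zone (assumed endpoint shape)
$zone    = "example.com."
$headers = @{ Authorization = "Bearer $token" }
$rrsets  = Invoke-RestMethod -Uri "https://test-restapi.ultradns.com/v1/zones/$zone/rrsets/A" `
    -Headers $headers
$rrsets.rrSets | Select-Object ownerName, rdata
```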

The New Relic API and Powershell Pt. 2

In a previous post I wrote about using Powershell to work with the New Relic monitoring service API, which gives you the ability to query performance metrics it gathers about your application. Once you have this data, it's easy to do whatever you want with it: store it in a database, send a formatted report in email, etc. In the first post I included snippets of script code and instructions on how to connect to the API, pass the API key, and get a summary of metric data using the built-in XML parsing capabilities in Powershell.

What if you want more than just a summary? What if you also use the server monitoring features and want data about those metrics too? You can pull just about any New Relic metric data you want, and that's the subject of this second post.

First, I retrieved all of the metric names for a server. You can also specify a specific agent for the application monitoring functionality:
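A sketch using the WebClient approach from the first post. The v1 endpoint shape and the agent id are assumptions; substitute your own values.

```powershell
# List all metric names for a server agent and save them to a file (assumed v1 endpoint)
$wc = New-Object System.Net.WebClient
$wc.Headers.Add("x-api-key", "YOUR_API_KEY")
$url = "https://api.newrelic.com/api/v1/agents/12345/metrics.xml"   # 12345 = hypothetical agent id
[xml]$metrics = $wc.DownloadString($url)
$metrics.metrics.metric | ForEach-Object { $_.name } | Out-File C:\temp\metricnames.txt
```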

The above will return all of the metric names and write them to a text file. Once you know the names, it's time to start building the code to get at the data. In this example we'll use the server monitoring metric System/CPU/System/percent to get an XML-formatted response:
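A sketch of that request; the query shape for filtering to a single metric name is an assumption.

```powershell
# Ask for the fields available on a single metric (assumed endpoint and query shape)
$wc = New-Object System.Net.WebClient
$wc.Headers.Add("x-api-key", "YOUR_API_KEY")
$url = "https://api.newrelic.com/api/v1/agents/12345/metrics.xml?re=System/CPU/System/percent"
$wc.DownloadString($url)
```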


The above will return the following XML data:

<metric name="System/CPU/System/percent">
<fields type="array">
<field name="average_exclusive_time"/>
<field name="average_response_time"/>
<field name="average_value"/>
<field name="call_count"/>
<field name="calls_per_minute"/>
<field name="max_response_time"/>
<field name="min_response_time"/>
</fields>
</metric>

Armed with this info, I was then able to build a call that got the average CPU value for the server over the last 24 hours. I found that the API expects a very specific timestamp format within the metric URL, so I used Get-Date to format it the way I wanted and set the start time to 24 hours ago. There may be a more elegant way to insert the date, but this is what I came up with.

Then I built a variable containing the timestamp string in the proper format. The API will return errors if it isn't formatted exactly right.
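A sketch of building that timestamp; the exact format string is an assumption about what the v1 API accepts.

```powershell
# Start time: 24 hours ago, formatted as an ISO-8601 style timestamp (assumed format)
$begin = (Get-Date).AddHours(-24).ToString("yyyy-MM-ddTHH:mm:ssZ")
```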

Finally, I built the API call and retrieved the data. Note how I’ve specified my $begin variable, the metric name, the average_value field, and the agent_id in the $url variable.
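A sketch of that call. The account id, agent id, the data.xml endpoint shape, and the response layout are all assumptions; substitute your own values.

```powershell
# Pull the 24-hour average CPU value for one server agent (assumed v1 endpoint shape)
$wc = New-Object System.Net.WebClient
$wc.Headers.Add("x-api-key", "YOUR_API_KEY")
$url = "https://api.newrelic.com/api/v1/accounts/99999/metrics/data.xml" +
       "?metrics[]=System/CPU/System/percent&field=average_value" +
       "&begin=$begin&agents[]=12345&summary=1"
[xml]$data = $wc.DownloadString($url)
# Grab the field's value from the response (assumed response shape)
$servCPU = $data.metrics.metric.field."#text"
```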

$servCPU will now hold the raw value; you can round it or leave it as is. You can use this method for all of the available metrics, as well as the other fields in the CPU data. New Relic has more documentation for their API on their Github page.

The New Relic API and Powershell

I've used the awesome performance monitoring tool New Relic to gather diagnostics and other stats for applications. I thought it would be a really cool idea to get some of the metrics using the New Relic API, but there wasn't much information on how to do it with Powershell. It's relatively simple though, and can be done in different ways depending on the Powershell version you use. The code I'm using targets 2.0, but I'll include some 3.0 equivalents.

The New Relic API is REST-based but requires authentication. Similar to other services, you'll need to enable API access for your account. This generates an API key you'll use to authenticate. There is a good document covering the API on Github.

With this simple bit of code, you can get and parse the data returned by New Relic:
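A sketch of that code. The summary endpoint, account id, and application id are assumptions; the -replace handles the hyphenated element name the next paragraph talks about.

```powershell
# Fetch the application summary metrics and parse the XML (assumed endpoint and ids)
$wc = New-Object System.Net.WebClient
$wc.Headers.Add("x-api-key", "YOUR_API_KEY")
$raw = $wc.DownloadString("https://rpm.newrelic.com/accounts/99999/applications/12345/threshold_values.xml")
# Rename the hyphenated element so Powershell's dotted notation works
[xml]$summary = $raw -replace "threshold-values", "thresholdvalues"
$summary.thresholdvalues.threshold_value | ForEach-Object { $_.name + ": " + $_.formatted_metric_value }
```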

Using the XML abilities baked into Powershell, I can now use the common dotted notation to get what I want. I'm also doing a replace on the returned data to get rid of the hyphen in it, which tripped Powershell up no matter what I did to try and escape it. I decided to circle back to that little problem later and just did the replace. Keep in mind this only applied to the summary metrics data that I wanted, and may not be needed for some of the other API calls.

Here is an example of returning all of the New Relic metric names. Here I'm using the 3.0 auto-foreach, which automatically outputs all of the values; you will need a real loop in 2.0:
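A sketch, assuming the same metrics endpoint and hypothetical agent id as above.

```powershell
# PS 3.0: member enumeration walks the whole collection for you (assumed endpoint and id)
$wc = New-Object System.Net.WebClient
$wc.Headers.Add("x-api-key", "YOUR_API_KEY")
([xml]$wc.DownloadString("https://api.newrelic.com/api/v1/agents/12345/metrics.xml")).metrics.metric.name
```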

With Powershell 3, you can also use the built-in REST commands and its auto-foreach to achieve the same results and more. I didn't dive into that much because most of my production environments are on 2.0, but here's a sample of how that would look:
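A sketch with Invoke-RestMethod, using the same assumed endpoint and hypothetical id.

```powershell
# PS 3.0: Invoke-RestMethod parses the XML response for you (assumed endpoint and id)
$headers = @{ "x-api-key" = "YOUR_API_KEY" }
(Invoke-RestMethod -Uri "https://api.newrelic.com/api/v1/agents/12345/metrics.xml" `
    -Headers $headers).metrics.metric.name
```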

Using ESEUTIL To Copy Large Files

Many Exchange admins are familiar with the venerable Exchange database utility ESEUTIL. I've used it many times when working with Exchange databases, and it still exists in Exchange 2010. Recently a DBA coworker and I had a scenario where log shipping for a customer's site was taking way too long to complete due to a slow network. The DR site is in California, on the other side of the country, and this was affecting our ability to keep things updated.

We experimented with different file transfer tools like Robocopy with little success, until we discovered you can use ESEUTIL to move large files with a respectable performance gain. This is because the utility is designed to move and work with large Exchange databases, and its copy mode uses an I/O pattern suited to big files rather than the small buffers of a standard file copy. This MSDN blog post outlines the details.

I whipped up a nifty Powershell script that gathers the SQL transaction logs (in our case generated by Redgate SQL Backup) that have the archive bit set, calls ESEUTIL to copy them to the DR site, then clears the bit. All you need is eseutil.exe and ese.dll from an Exchange server placed somewhere on the SQL server so the script can call them. We saw about a 20% increase in copy speeds when using this method.
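A sketch of that script. The paths, share name, and log file extension are hypothetical; ESEUTIL's /y switch copies a file and /d gives the destination.

```powershell
# Copy SQL backup logs that have the archive bit set via ESEUTIL, then clear the bit.
# C:\Tools, D:\SQLLogs, and \\DRSERVER\SQLLogs are placeholder paths -- adjust to taste.
$source = "D:\SQLLogs"
$dest   = "\\DRSERVER\SQLLogs"
Get-ChildItem $source -Filter *.sqb | Where-Object {
    ($_.Attributes -band [IO.FileAttributes]::Archive) -ne 0
} | ForEach-Object {
    & "C:\Tools\eseutil.exe" /y $_.FullName /d (Join-Path $dest $_.Name)
    if ($LASTEXITCODE -eq 0) {
        # Copy succeeded -- clear the archive bit so we skip this file next run
        $_.Attributes = $_.Attributes -bxor [IO.FileAttributes]::Archive
    }
}
```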