Where is the endpoint setting for a VM in the new Azure portal?

The endpoint settings of a VM allow us to configure incoming traffic to the VM, such as remote desktop and custom HTTP ports (TeamCity, Octopus Deploy, etc.). It was pretty easy to find that setting in the old HTML5 Azure portal, https://manage.windowsazure.com.

VM EndPoint Setting in old Azure portal

But you can't use the new type of VM with Resource Manager on the old portal, so you have no choice but to use the new Azure portal, https://portal.azure.com. The problem (at least for me) came when I wanted to open some ports (endpoints) on a new VM via the new portal. It took me a while to find the setting, so I thought I would share it here for those who might have the same issue.

Let's see what you get when you create a new VM with Resource Manager.

Microsoft Azure Resource Group

By default, you will get the following things when you create a VM, but of course you have the option to choose what to create or what to re-use during setup.

  • Virtual machine
  • Network Interface
  • Network Security Group
  • Public IP Address
  • Virtual network
  • Storage Account

Choose "Network Security Group" and you will see a settings page that looks similar to the Windows Firewall with Advanced Security interface on a Windows server or desktop.


Azure Network Security Group

Click on "Inbound security rules". This is where you can enable the endpoints of your new VM. Of course, don't forget to open the same port in your server's OS as well.

Azure VM Firewall
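
If you prefer to script this instead of clicking through the portal, here is a minimal sketch using the Azure Resource Manager PowerShell cmdlets (AzureRM.Network module). The NSG name, resource group and port 8080 (say, for TeamCity) are placeholders for your own values.

# Open TCP 8080 on the VM's Network Security Group (names and port are placeholders)
$nsg = Get-AzureRmNetworkSecurityGroup -Name "myvm-nsg" -ResourceGroupName "my-resource-group"

Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-TeamCity" `
    -Protocol Tcp -Direction Inbound -Priority 1010 -Access Allow `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 8080

Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg

# The matching rule in the OS firewall, run inside the VM itself
New-NetFirewallRule -DisplayName "TeamCity 8080" -Direction Inbound -Protocol TCP -LocalPort 8080 -Action Allow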

Azure Website: ERROR_PROXY_GATEWAY in deploying an Azure website using MSDeploy from TeamCity

Error Message

[14:33:45][VSMSDeploy] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets(4270, 5): error ERROR_PROXY_GATEWAY: Web deployment task failed. (Could not connect to the remote computer (“[yoursite].azurewebsites.net”) using the specified process (“Web Management Service”). This can happen if a proxy server is interrupting communication with the destination server. Disable the proxy server and try again. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_PROXY_GATEWAY.)

Solution

There is an official IIS page that lists the error codes and possible resolutions. The suggestion for ERROR_PROXY_GATEWAY is below.

Diagnosis – A proxy gateway is preventing Web Deploy from communicating with the remote Web Deploy endpoint.

Resolution – Web Deploy does not read system proxy settings. As a workaround, try disabling the system proxy:

  • Start Internet Explorer
  • Click Tools > Options
  • Click Connection
  • Click LAN Settings
  • Disable all checkboxes

It didn't really help me directly because my build server is in an Azure data center in the same region where I host my test site, and it had been working fine for a long time. But when I tried to RDP to the server, I found that there was an alert about a pending server restart. So it seems that when the server needs to be restarted, it might limit connections or something similar. I was still able to reach the TeamCity website, but MSDeploy didn't work, so I did the restart and everything was back to normal again.

So, if you are getting the same error all of a sudden, RDP to your server and see whether there is a restart alert or not.

Azure WebJob Issue after package update – Host is not running

It took me a while to find the solution for this problem, and Google gave me nothing when I searched for the error (warning) message, so I thought it was worth sharing here with you all.

Problem

Error Message: Host is not running; requests will be queued but not execute until the host is started

Screenshot

Web Job Issue

Solution

It happened because of the breaking changes in the 0.5 beta version of the WebJobs SDK. Azure also used to have a problem with resources of the same name in different regions because of caching; I believe some parts of the Azure tooling, like Visual Studio's publish-to-Azure dialog and SCM, were not built with "same name/multiple regions" or multiple subscriptions in mind. I reported "Problem: Issue with multiple Azure subscriptions with same name" before.

The solution for this issue is to make the class and its methods public. It's an unusual requirement for a console program, but that's how it is and it works.


using System.Threading.Tasks;
using Microsoft.Azure.WebJobs; // namespace may differ in pre-1.0 SDK builds (e.g. Microsoft.Azure.Jobs)
using NLog;

// Both the class and the triggered methods must be public, or the host won't find any functions.
public class Program
{
    private static Logger _logger = LogManager.GetCurrentClassLogger();

    public static void Main(string[] args)
    {
        _logger.Info("Test Main Started");
        var host = new JobHost();
        host.RunAndBlock(); // blocks and listens for triggers
        _logger.Info("Test Main Ended");
    }

    // Runs whenever a message arrives on the "testqueue" storage queue.
    public static async Task TestAsync([QueueTrigger("testqueue")] string _)
    {
        _logger.Info("TestAsync Started");
        await Task.Delay(30);
        _logger.Info("TestAsync Ended");
    }
}

And if you toggle the output of your WebJob in SCM, you will see the log below, which tells you that you need to make the class and methods public.

[INFO] No functions found. Try making job classes public and methods public static.

So, that's it. Hope it helps save some of your time.

 

Deploying an Azure website and Azure WebJob from Octopus Deploy (+ TeamCity)

Introduction

Last week was a busy week for us, but we are glad that we managed to bring Azure WebJobs with Octopus Deploy into one of our small projects. I'd like to share what we have learnt and get some feedback from you guys.

Azure Web Job

Oh well, Scott Hanselman did a pretty good job of explaining it in his blog post "Introducing Windows Azure WebJobs", so I am not going to repeat the same thing here. I will just give you a short note on it.

What is Azure Web Job?

It's a backend job that you run on Azure. It's like a Windows Service that you run on your machine, or a batch job that you used to run from the Windows Scheduler. There are three trigger points, as below~

  1. Azure Storage: You can trigger the Azure WebJob by sending a message to a blob, queue or table in your Azure storage account (see the sketch after this list).
  2. Scheduler: Just like the Windows scheduler, you can run the WebJob either once or on a regular basis.
  3. HTTP/HTTPS endpoint: You can run your WebJob by hitting this endpoint as well.
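
For example, to trigger a queue-based WebJob by hand, you could drop a message on the queue from Azure PowerShell. This is only a sketch under a few assumptions: the storage account name, key and queue name are placeholders, and the Azure PowerShell module (which loads the storage client library) is installed.

# Drop a test message on the queue that the WebJob listens to (account name, key and queue name are placeholders)
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "{your storage key}"
$queue = Get-AzureStorageQueue -Name "testqueue" -Context $ctx
$message = New-Object Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage("hello from PowerShell")
$queue.CloudQueue.AddMessage($message)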

What is the difference between an Azure WebJob vs a Cloud Service (Worker Role) vs a Virtual Machine?

  • VM: You can install your backend job on a VM, but you need to maintain the VM on your own (for example: OS/framework updates, security patches).
  • Cloud Service: It's still a VM, but it is managed by Microsoft so you don't need to maintain it. Still, you might not need a whole VM just to run your backend job.
  • Web Job: It's like using a shared host. You don't need to maintain any VM, and it has full integration with Azure Storage as well. If you are a fan of Azure Websites, you will probably like it too.

Where is it stored?

The WebJob is stored in the following directory.

site\wwwroot\App_Data\jobs\{job type}\{job name}

It’s important to know because we are going to use it later.

OK, my short note on Azure WebJobs ends here, since it's not my intention to write about WebJobs in this post. There are a lot of useful blog posts about Azure WebJobs, so I am sure you can easily google them.

Or, you can read some of my favorite posts below~

Octopus Deploy

Why Octopus? As we have only one production environment, we don't strictly need Octopus. I found only one reason why you would need Octopus Deploy.

That reason is ~

  • Multiple-server deployment: If you have a lot of servers, then the Octopus Tentacle comes in handy. You can easily configure a Tentacle on each of your servers and deploy to all of them in one shot. The Tentacles take care of synchronizing the deployment to all servers. Cool, huh?

Note: I asked the Octopus team to confirm whether my assumption was correct or not. You can read it in this post, "What is the main selling point for octopusdeploy?" Yes, my assumption was correct!

In our case, we can actually publish the Azure website directly from CI (Team City) using MSDeploy.

Team City + MS Deploy + Azure Web Site

These are the command-line parameters ~

/p:Configuration=Release /p:OutputPath=bin
/p:VisualStudioVersion=11.0
/p:DeployOnBuild=True /p:DeployTarget=MSDeployPublish
/p:MsDeployServiceUrl=https://{yourazurewebsite-url}:443/msdeploy.axd
/p:AllowUntrustedCertificate=True
/p:DeployIisAppPath={your-app-pool-name}
/p:MSDeployPublishMethod=WMSVC
/p:username={your azure website user name from the publish settings}
/p:password={your azure website password from the publish settings}

Note: You can get the user name and password from the publish profile on your website dashboard.
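
For context, those parameters just get appended to a normal MSBuild call against the web project. A rough sketch of the full invocation is below; the MSBuild path, project path and all {placeholder} values are assumptions about your environment.

# Hypothetical full call - replace the project path and every {placeholder} with your own values
& "C:\Program Files (x86)\MSBuild\12.0\Bin\MSBuild.exe" .\MyWebApp\MyWebApp.csproj `
    /p:Configuration=Release /p:OutputPath=bin /p:VisualStudioVersion=11.0 `
    /p:DeployOnBuild=True /p:DeployTarget=MSDeployPublish `
    "/p:MsDeployServiceUrl=https://{yourazurewebsite-url}:443/msdeploy.axd" `
    /p:AllowUntrustedCertificate=True "/p:DeployIisAppPath={your-app-pool-name}" `
    /p:MSDeployPublishMethod=WMSVC `
    "/p:username={your azure website user name}" "/p:password={your azure website password}"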

quick glance

I wrote about using MSDeploy to publish a website a while back: "WebDeploy 3 – Error in publishing website to Amazon EC2".

Anyway, as everyone is talking about Octopus, I thought it might be a good idea to try it and get a taste of it.

So I downloaded Octopus (2.4.7), which includes the Octopus Server (x64), Octopus Tentacle (x64) and the TeamCity plugin.

Team City + Octopus Team City Plugin

I installed the Octopus plugin in TeamCity by placing the zip file under <TeamCity Data Directory>/plugins. If you are not sure about the <TeamCity Data Directory>, you can check it on the "Administration -> Global Settings" page in TeamCity. The default path is C:\ProgramData\JetBrains\TeamCity. Then restart the TeamCity service for the new plugin installation to take effect.
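
If you want to script that step, here is a rough sketch; the plugin zip name and the TeamCity service name are assumptions, so verify them on your box (for example with Get-Service).

# Copy the Octopus TeamCity plugin into the data directory and restart TeamCity
Copy-Item .\Octopus.TeamCity.zip "C:\ProgramData\JetBrains\TeamCity\plugins\"
Restart-Service -Name "TeamCity"   # service name is an assumption; check with Get-Service *TeamCity*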

If your installation is working fine, you will see "Octopus Packing" in the "MS Build runner" or "VS build runner" build steps.

Octopus Packing

Note that Octopus has a limitation on the version number: it doesn't work if your version is just a single number, so you will have to change it to something like "1.0.%build.counter%" in "Build number format" under "General Settings".

After that, you need to enable the NuGet feed on the TeamCity administration page.

Enable Nuget in TeamCity

That's all. If you want to see step-by-step instructions, you can check this page: "TeamCity + Octopus Deployment".

Octopus Server

It was pretty easy to install and configure the Octopus server on my server. Good job, Octopus!

Octopus Tentacle

Obviously, I need the server and the TeamCity plugin, but why the Tentacle?

Oh well, Octopus has another limitation besides the build number: it doesn't work without a Tentacle. I think they didn't consider the deployment scenario where we, the developers, don't have any server of our own. I asked them here (link) to confirm this. To work around it, I had to install a Tentacle on the same server that hosts the Octopus Server and configure it as a machine on Octopus's Environment page.

Octopus Environment

How do you connect TeamCity and Octopus? You need to add the NuGet feed that you enabled in TeamCity by using the "Add Feed" button on the Octopus Library page.

Octopus External Feed1

 

After that, you can create a new project in Octopus and add the steps in “Process” panel.

Note that there are a few different ways, listed below, to deploy the Azure website and WebJob.

  • FTP Upload
  • Git push
  • MS Deploy

Even though I am told that there is an Octopus MSDeploy template, I decided to use FTP upload in my case. (Yeah, I am not a big fan of git-push deployment so far. Sorry!)

Here is the default FTP template that you can use when adding the step in "Process".

Upload by FTP Template

 

Here are the step details that you need to fill in for the FTP template.

Octopus FTP Step Details

You can get the FTP information for your site or WebJob from the Azure dashboard.

Remember what we said in the "Where is it stored?" section? The WebJob is stored in the following location, so you will have to point your "FTP upload" step at this directory.

site\wwwroot\App_Data\jobs\{job type}\{job name}
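
As a sanity check outside Octopus, you can push a continuous WebJob binary into that folder over plain FTP with a few lines of PowerShell. This is only a sketch: the FTP host, job name, local path and credentials are placeholders taken from your site's publish profile.

# Upload a WebJob executable straight into the App_Data\jobs folder (all values are placeholders)
$ftpUser = 'mysite\$mysite'                      # deployment user from the publish profile
$ftpPassword = "{your ftp password}"
$ftpUrl = "ftp://{waws-prod-xx-xxx}.ftp.azurewebsites.windows.net/site/wwwroot/App_Data/jobs/continuous/MyJob/MyJob.exe"

$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential($ftpUser, $ftpPassword)
$client.UploadFile($ftpUrl, "C:\build\MyJob\MyJob.exe")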

That’s it.

This is how we are using TeamCity + Octopus to publish an Azure website and WebJob. What is yours?

Amazon EC2 + CloudWatch + New Relic: Monitoring, Alert Notifications

We have been on EC2 for almost two years, but recently we needed to monitor a few things on the server, so we did a bit of experimenting and deployed it a couple of months back. We got a pretty good result, so I am going to share it here as usual.

Purpose

To monitor the following things on an Amazon EC2 virtual machine and send alerts based on certain conditions.

  • CPU Utilization
  • Memory Available
  • Disk Space Available
  • Network
  • VM Status

Methods of monitoring

There are a few ways to monitor EC2 instances, but we chose Amazon CloudWatch and New Relic.

  • Cloud Watch
  • New Relic

Amazon Cloud Watch

The reason is simple: we chose it because Amazon provides it. There is no cost for monitoring the basics (Amazon CloudWatch Pricing), so it fits us well.

Basic Monitoring metrics (at five-minute frequency) for Amazon EC2 instances are free of charge, as are all metrics for Amazon EBS volumes, Elastic Load Balancers, and Amazon RDS DB instances.
New and existing customers also receive 10 metrics (applicable to Detailed Monitoring for Amazon EC2 instances or Custom Metrics), 10 alarms, and 1 million API requests each month at no additional charge.

Amazon EC2 Console – Default Monitoring

This is what you will see when you open the EC2 console. You can enable monitoring and create alarms in this console as well.


Amazon EC2 - Monitoring Page

CloudWatch – Monitoring

Once you have enabled default monitoring in the EC2 console, you will see the following metrics in the CloudWatch console.

CloudWatch Console for EC2

 

Of course, this is not all we need. We also need to monitor disk space, memory utilization and so on, so we need to create some custom scripts for that.

AWS SDK for .NET

The "AWS SDK for .NET" allows you to create scripts that push custom metrics to CloudWatch. You can download the SDK from this link (http://aws.amazon.com/sdkfornet/). We want to monitor disk space and memory, which are not included in EC2 default monitoring, so we need to install the SDK and create scripts for that.
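
To give you a taste of what those scripts do under the hood, here is a hedged sketch that pushes one custom metric (available memory) with the AWS Tools for PowerShell that ship alongside the SDK. The namespace, region and inline credentials are assumptions for illustration; in practice you would use a credential profile.

# Push the current available memory as a custom CloudWatch metric (namespace, region and keys are placeholders)
Import-Module AWSPowerShell

$datum = New-Object Amazon.CloudWatch.Model.MetricDatum
$datum.MetricName = "MemoryAvailableMB"
$datum.Unit = "Megabytes"
$datum.Value = (Get-Counter '\Memory\Available MBytes').CounterSamples[0].CookedValue

Write-CWMetricData -Namespace "Custom/Windows" -MetricData $datum `
    -AccessKey "{your access key id}" -SecretKey "{your secret access key}" -Region us-east-1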

Amazon CloudWatch Monitoring Scripts for Microsoft Windows Server

You can roll your own script to monitor whatever you want, but it's always a good idea to google before writing your own, because if your requirement is very common then someone may have already created a script for it.

I found a bunch of monitoring scripts at this link (http://aws.amazon.com/code/7932034889155460). The package includes scripts for the following metrics.

  • Memory Utilization (%)
  • Memory Used (MB)
  • Memory Available (MB)
  • Page File Utilization (%)
  • Page File used (MB)
  • Page File available (MB)
  • Disk Space Utilization (%)
  • Disk Space Used (GB)
  • Disk Space Available (GB)
  • Perfmon Counters.

Amazon Access Key ID and Secret Access Key

To run those scripts, you will probably need to provide your access key. It wasn't that straightforward to find your own secret access key on Amazon, so I captured screenshots of the navigation.

3. Security Credentials 4. Amazon Access Key ID and Secret Access Key

 

Bugs in Amazon CloudWatch Monitoring Scripts

You will get the following error when you try to load the Amazon powershell module.

Error Screenshot

5. Default PowerShell Error

Error message in text (for Google, of course!)

Windows PowerShell
Copyright (C) 2012 Microsoft Corporation. All rights reserved.

Import-Module : The specified module ‘C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell.psd1’ was not loaded
because no valid module file was found in any module directory.
At C:\Users\michael.sync\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1:1 char:1
+ Import-Module “C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell.psd1”
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (C:\Program File…PowerShell.psd1:String) [Import-Module], FileNot
FoundException
+ FullyQualifiedErrorId : Modules_ModuleNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand

PS C:\Users\michael.sync>

Solution

This issue occurs because of a wrong path in the PowerShell profile. Look at the screenshot below for the file name and its location, and fix the path.

6. PowerShell Error RootCause

Note: $env:PSModulePath is the automatic variable which holds the paths used to discover modules. If it's not set, PowerShell looks in C:\Windows\System32\WindowsPowerShell\v1.0\Modules and MyDocuments\WindowsPowerShell\Modules.
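
In my case the fix was a one-line change in Microsoft.PowerShell_profile.ps1. The corrected path below is an assumption based on a default install, so point it at wherever AWSPowerShell.psd1 actually lives on your machine.

# Microsoft.PowerShell_profile.ps1 - corrected Import-Module path (verify the folder on your install)
Import-Module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"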

After fixing the wrong path, you should be able to run any script from the "Amazon CloudWatch Monitoring Scripts" package.

7. PowerShell

If you manage to run the scripts for the metrics that you want to appear, you can then trigger those scripts from the Windows scheduler every 5 minutes or so.
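
You can create that schedule in the Task Scheduler UI, or register it from PowerShell (Windows Server 2012 and later). Here is a rough sketch, with the script path, arguments, task name and interval all being assumptions:

# Run the memory metrics script every 5 minutes via the Windows Task Scheduler (paths are placeholders)
$action = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -File C:\Scripts\AmazonCloudWatchMonitoringWindows\mon-put-metrics-mem.ps1 -aws_credential_file C:\Scripts\awscreds.conf -mem_util -mem_avail"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 5) -RepetitionDuration (New-TimeSpan -Days 3650)
Register-ScheduledTask -TaskName "CloudWatch-MemoryMetrics" -Action $action -Trigger $trigger -User "SYSTEM"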

You will then see the new custom metrics in the CloudWatch dashboard, as below, and you can go ahead and create some alerts on them.

8. Windows Custom Matrix

Here are the scripts that I am using to monitor memory utilization and disk space.


.\mon-put-metrics-mem.ps1 -aws_credential_file C:\Users\michael.sync\Downloads\AmazonCloudWatchMonitoringWindows\awscreds.conf -mem_util -mem_used -mem_avail -page_avail -page_used -page_util -memory_units Megabytes

.\mon-put-metrics-disk.ps1 -aws_credential_file C:\Users\michael.sync\Downloads\AmazonCloudWatchMonitoringWindows\awscreds.conf -disk_drive C:, D: -disk_space_util -disk_space_used -disk_space_avail -disk_space_units Gigabytes

OK, I know! It's not very simple, so let's take a look at some third-party stuff.

New Relic

newrelic_logo-300x74

We chose New Relic (http://newrelic.com/) because they officially support monitoring EC2 instances in a very simple way.

Installers for New Relic – Servers

All you need to do is download the installer and install it on your VM. That's it!

New Relic EC2
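
If you need to install it unattended on several instances, the server monitor MSI can, as far as I remember, be installed silently with your license key. Treat the MSI file name and the property name as assumptions and double-check them against New Relic's documentation.

# Hypothetical silent install of the New Relic server monitor (verify the MSI name and property)
msiexec.exe /qn /i "NewRelicServerMonitor_x64_3.3.3.0.msi" NR_LICENSE_KEY="{your license key}"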

 

You will get the following dashboard after running the New Relic installer on your server.

 

New Relic Chart for EC2

Plugins – New Relic

If you are not happy with the default monitoring, you can look at the thousands of plugins in "Plugin Central", or you can even create your own. (Note: We haven't used the "Amazon EC2" plugin yet, but we are planning to test it in a few weeks' time.)

New Relic Plugin

Looks cool and simple? Yes, it is!

New Relic has a few different plans that you can choose from. As of now, we are using the Lite (a.k.a. Standard) version, so we only have 24 hours of data retention. You can look at their prices at http://newrelic.com/pricing for details.

New Relic Price

 

One last question: is the New Relic service expensive for servers?

Here is what we found and I think it seems pretty okay.

Is New Relic expensive

Are you an EC2, CloudWatch or New Relic user with a tip to share? Please feel free to drop a comment here. Thanks!

Continuous Delivery: full script for deploying Azure WebRole and WorkerRole from PowerShell

After I posted Windows Azure Deployment – Problem and Solution #1 and Windows Azure Deployment – Problem and Solution #2 here last week, one of my blog readers emailed me to ask whether or not I have the full script for deploying a WebRole or WorkerRole on Windows Azure.

Yes, I still have it, but I stopped using web roles and worker roles because deploying them takes so much time (around 20-40 minutes), which is unacceptable for us. Anyway, if a 20-40 minute deployment time is not a concern for you, then sure, you can use it.

I wrote and tested this script with Azure SDK 2.1 on a Windows Server 2012 machine.

You can download it from my github repository as well. Here is the link https://github.com/michaelsync/Michael-Sync-s-blog-sample/blob/master/azure_deploy.ps1


#Modified and simplified version of https://www.windowsazure.com/en-us/develop/net/common-tasks/continuous-delivery/

$thumbprint = "{Your Cert's Thumbprint}"
$myCert = Get-Item cert:\CurrentUser\My\$thumbprint
$subscriptionId = "{Your Subscription Id}"
$subscriptionName = "{Your Subscription Name}"
$webroleservice = "{Your Web Role Name}"
$workerroleservice = "{Your Worker Role Name}"

$slot = "staging" #staging or production

$package = "{Path of your Azure project}\bin\Release\app.publish\{Your Project}.cspkg"
$configuration = "{Path of your Azure project}\bin\Release\app.publish\ServiceConfiguration.Cloud.cscfg"

$timeStampFormat = "g"

Write-Output "Running Azure Imports"
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"
Import-AzurePublishSettingsFile "{Path where you stored your Azure settings}\Azure.publishsettings"

function Publish(){
    PublishInternal $webroleservice
    PublishInternal $workerroleservice
}

function PublishInternal($service){

    Write-Output "Publishing"
    Write-Output $service

    Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscriptionName -SubscriptionId $subscriptionId -Certificate $myCert
    Write-Output "Set-AzureSubscription"
    $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

    Write-Output $deploymentLabel

    $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue
    Write-Output $a

    if ($a[0] -ne $null) {
        Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
    }

    if ($deployment.Name -ne $null) {
        #Update deployment in place (usually faster, cheaper, won't destroy the VIP)
        Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
        UpgradeDeployment $service $deploymentLabel
    } else {
        CreateNewDeployment $service $deploymentLabel
    }
}

function CreateNewDeployment($service, $deploymentLabel)
{
    write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

    $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid

    write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
}

function UpgradeDeployment($service, $deploymentLabel)
{
    write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

    # perform Update-Deployment
    $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid

    write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
}

Write-Output "Create Azure Deployment"
Publish
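
For reference, we trigger this from a TeamCity PowerShell build step. A minimal sketch of the invocation is below; the file location is an assumption, and note that TeamCity only expands %build.number% when the script source is pasted into the build step itself, so if you run it from a file you may want to pass the build number in as an argument instead.

# Hypothetical TeamCity build step command - the checkout path is an assumption
powershell.exe -NoProfile -ExecutionPolicy Bypass -File .\azure_deploy.ps1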

Here is the log for deploying webrole from Visual Studio.

New Deployment

10:56:32 PM – Warning: There are package validation warnings.
10:56:32 PM – Checking for Remote Desktop certificate…
10:56:33 PM – Uploading Certificates…
10:56:51 PM – Preparing deployment for MvcApplication1.Azure – 10/7/2013 10:56:06 PM with Subscription ID ‘1775dfasa8a81fd58’ using Service Management URL ‘https://management.core.windows.net/’…
10:56:51 PM – Connecting…
10:56:51 PM – Verifying storage account ‘mymvctestx01’…
10:56:52 PM – Uploading Package…
11:04:04 PM – Creating…
11:04:54 PM – Created Deployment ID: 337b850dfa3c4dd38dd35441b5b8e337.
11:04:54 PM – Instance 0 of role MvcApplication1 is stopped
11:04:55 PM – Starting…
11:05:45 PM – Initializing…
11:05:45 PM – Instance 0 of role MvcApplication1 is creating the virtual machine
11:06:18 PM – Instance 0 of role MvcApplication1 is starting the virtual machine
11:07:57 PM – Instance 0 of role MvcApplication1 is busy
11:09:37 PM – Instance 0 of role MvcApplication1 is ready
11:09:37 PM – Created Website URL: http://mymvctestx01.cloudapp.net/
11:09:37 PM – Complete.

Upgrading

11:12:40 PM – Warning: There are package validation warnings.
11:12:40 PM – Checking for Remote Desktop certificate…
11:12:42 PM – Preparing deployment for MvcApplication1.Azure – 10/7/2013 11:12:14 PM with Subscription ID ‘17751asdfs8a81fd58’ using Service Management URL ‘https://management.core.windows.net/’…
11:12:42 PM – Connecting…
11:12:42 PM – Verifying storage account ‘mymvctestx01’…
11:12:44 PM – Uploading Package…
11:20:08 PM – Updating…
11:23:19 PM – Instance 0 of role MvcApplication1 is ready
11:23:21 PM – Starting…
11:23:39 PM – Initializing…
11:23:40 PM – Created Website URL: http://mymvctestx01.cloudapp.net/
11:23:40 PM – Complete.

Generally, it takes around 10 to 20 minutes, but sometimes it takes around 40 minutes. I am not the only one who has experienced this problem; you can search for it online. I hope the Azure team will do something about it.