Author Archives: druss

ASP.NET Core + PostgreSQL + Docker + Bitbucket = ♥


How do you build, test and deploy your ASP.NET Core application in a single click (commit & push)? In this article I will answer this question and show you how to configure CI and CD with Docker and Bitbucket.

We will develop a simple ASP.NET Core application with a single API method that saves a string value to the database, using PostgreSQL as the storage for those values. All code will be hosted in a Bitbucket git repository, and we will configure Bitbucket Pipelines to build the application, create a Docker image and push it to Docker Hub every time we push code to the remote repository. After the image has been pushed to Docker Hub, it will trigger a webhook on our “production” server, which will pull the uploaded image from Docker Hub and restart docker-compose with the new image.

Docker and docker-compose

Let’s start with some basic tools we are going to use in this article. Those who are already familiar with Docker and docker-compose can skip directly to the next chapter.

What is Docker? Here is the official answer. And a simple one for those who have never worked with containers before but have experience with virtual machines:

docker container – lightweight “virtual machine”

docker image – initial snapshot of the “vm”

A good explanation from Stack Overflow about the difference between a container and an image: “An instance of an image is called a container. You have an image, which is a set of layers as you describe. If you start this image, you have a running container of this image. You can have many running containers of the same image.” Thomas Uhrig

Docker Hub – a cloud-based registry of Docker images. You can create your own image and then push it to Docker Hub (similar to GitHub for your code)

docker cli tools – a set of tools to manage images and containers, as well as to pull and push images from Docker Hub and do many other things

Dockerfile – a file that contains the instructions to create an image

docker-compose – a tool to define and run multiple containers as a single application

ASP.NET Core application

Create a Web API application:

mkdir test
cd test 
dotnet new webapi

Add PostgreSQL support to your project (edit the test.csproj file):

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
	<PackageReference Include="Microsoft.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.1.0" />
	<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="1.1.0" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.Design" Version="1.1.0" />
  </ItemGroup>

</Project>

Add AppDbContext.cs and Value.cs:

using Microsoft.EntityFrameworkCore;

namespace test
{
    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) :base(options)
        {
        }

        public DbSet<Value> Values { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
        }
    }
}
namespace test
{
    public class Value
    {
        public int Id { get; set; }
        public string Date { get; set; }
    }
}
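
Later we will call this API at http://localhost:5000/api/Values, but the article does not show the controller itself. Here is a minimal sketch of what the single API method that saves a string value could look like (the ValuesController class below is my assumption, not code from the original project):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace test
{
    // Hypothetical controller: one POST method that stores a string value
    // through AppDbContext, and a GET method that returns the stored values.
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        private readonly AppDbContext context;

        public ValuesController(AppDbContext context)
        {
            this.context = context;
        }

        [HttpPost]
        public async Task<IActionResult> Post([FromBody] string value)
        {
            context.Values.Add(new Value { Date = value });
            await context.SaveChangesAsync();
            return Ok();
        }

        [HttpGet]
        public IActionResult Get()
        {
            return Ok(context.Values);
        }
    }
}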

Edit Startup.cs

public void ConfigureServices(IServiceCollection services)
{
	// Add framework services.
	services.AddMvc();
	
	var sqlConnectionString = Configuration.GetConnectionString("DataAccessPostgreSqlProvider");

	services.AddDbContext<AppDbContext>(options =>
		options.UseNpgsql(sqlConnectionString)
	);
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();

	app.UseMvc();
	
	using (var context = app.ApplicationServices.GetService(typeof(AppDbContext)) as AppDbContext)
	{
		context.Database.Migrate();
		// Other db initialization code.
	}
}

And now, specify the connection string in appsettings.json:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "ConnectionStrings": {
    "DataAccessPostgreSqlProvider": "User ID=test;Password=test;Host=testpostgres;Port=5432;Database=test;Pooling=true;"
  }
}

Remember the Host, User ID and Password values (the Database name should be the same as the User ID); we will use them later during the PostgreSQL container configuration.

Restore, build and publish:

dotnet restore test.csproj
dotnet build test.csproj
dotnet publish -c Release -o publish_output test.csproj

If you try to start it right now, you will get an error saying that the PostgreSQL port is not reachable.

If you have PostgreSQL installed locally, you can change Host, User ID and Password to your local ones and run the application again to test it.

Build Docker image

Now that we have the application artifacts in the publish_output folder, it is time to build our Docker image.

Create a Dockerfile in the project root:


FROM microsoft/aspnetcore:1.1
EXPOSE 80
COPY publish_output .
ENTRYPOINT ["dotnet", "test.dll"]

Here we define that our image is based on microsoft/aspnetcore:1.1 from Docker Hub, expose port 80, copy our application to the root of the container and define the entry point (the command that will be executed when the container starts).

You can already test it by running:

docker build -t test-image .

This command creates an image named test-image from the Dockerfile.

You can run it:

docker run test-image

Create docker-compose.yml

In this file we will describe the dependencies between our application image and the official postgres image:

version: '2'

services:
  testpostgres:
     image: postgres
     restart: always
     environment:
         POSTGRES_USER: test
         POSTGRES_PASSWORD: test
     volumes:
       - pgdata:/var/lib/postgresql/data
  testapp:
    image: testapp
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    ports:
       - 5000:80
    depends_on:
       - "testpostgres"

volumes:
  pgdata:

In this file we describe two services, their parameters and the dependencies between them. As you can see, we use the standard postgres image from Docker Hub and pass some parameters to it. The service name must be the same as the host specified in the connection string, and the user and password must match as well. Then we define a Docker volume – persistent data storage for our postgres container.

In the second part we define our application service, specifying which Dockerfile docker-compose should use during the build and which ports we forward from the host to the container.

Our postgres instance will be reachable by its service name, so from our application we can connect to the database server via testpostgres.

To build the services:

docker-compose build

And to run it locally:

docker-compose up -d

Configure CI

Create an account and a repository on Docker Hub; in this example the user name is username and the repository name is testapp.

Now enable Pipelines in Bitbucket and create bitbucket-pipelines.yml in the root of your repo:

image: microsoft/aspnetcore-build:1.0-1.1

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - dotnet restore test.csproj
          - dotnet build test.csproj
          - dotnet publish -c Release -v n -o ./publish_output test.csproj
          - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
          - docker build -t username/testapp:dev .
          - docker push username/testapp:dev

options:
  docker: true

Here we have defined where to build our application (inside the microsoft/aspnetcore-build:1.0-1.1 image) and what to do during the build (the script section).

As you can see, in the last three steps we log in to Docker Hub, build our image and push it to the remote repository.

Run on remote server

On our “production server” we can create a similar docker-compose file that uses the image from Docker Hub:

version: '2'

services:
  testpostgres:
     image: postgres
     restart: always
     environment:
         POSTGRES_USER: test
         POSTGRES_PASSWORD: test
     volumes:
       - pgdata:/var/lib/postgresql/data
  testapp:
    image: username/testapp:dev
    restart: always
    ports:
       - 5000:80
    depends_on:
       - "testpostgres"

volumes:
  pgdata:

As you can see it is almost the same, but we have removed the build section and changed the image property, so the previously pushed image from Docker Hub will be used.

Run:

docker-compose pull
docker-compose up -d

Go to http://localhost:5000/api/Values

Done!

Does anyone read LinkedIn?

LinkedIn is the most popular business-oriented social network. A lot of us have an account there, but not many of us write anything there. Most of the time you see posts from recruiters about a new “awesome” position in “the best” company in the world, but almost no articles about the technical aspects of the work.

I asked myself: is it worth sharing my IT-related articles on LinkedIn, is anyone going to read them?

So, I made a post there ten days ago: “Hi, Guys! Can you somehow let me know if you see this post (like or comment). I want to see how many people are reading this feed. Just a small experiment. Thanks!”

After 10 days I had 10014 views, 131 likes and 13 comments, and that is with 1140 connections.

Here are the detailed statistics about the views:

Developers are the most common group in my network, so it is clear that they see this post the most. Recruiters are in second place 🙂

Currently I am living in the Netherlands, close to Amsterdam, so a lot of views came from people living in the same area. I am surprised that Ukraine is not on the list.

But on the next screen you will see only Ukrainian companies. The biggest outsourcing companies in Ukraine, Luxoft and Ciklum, are missing…

And of course most of the views came from my 2nd degree network:

I think that LinkedIn is good enough for sharing work-related articles. You will get quite a lot of views and most of them will be from people who share your work-related interests.

P.S. Join my network: https://www.linkedin.com/in/ivan-derevianko-5a237239/

Enable bash on Windows 10

Finally! You can now use bash and almost any Linux program on your Windows 10 machine. You do not need Cygwin or MinGW anymore! This gives you the opportunity to use a rich variety of tools available only on Linux, for example wrk – a great HTTP benchmarking tool which I plan to use for a new ASP.NET Core 1.1 benchmark.

The Linux user space is available starting from Windows 10 version 1607, but it is disabled by default.

To enable it you should:

  1. Go to Settings -> Update & security -> For developers and switch to Developer mode
  2. Go to Control Panel -> Programs and Features -> Turn Windows features on or off -> enable Windows Subsystem for Linux (Beta)
  3. Restart computer
  4. Run bash!

Nginx Server Blocks (Virtual Hosts) Example Template

Here is an example of an nginx server block (virtual host) which you can use to host multiple web sites on the same server.

Just replace example.com with your own domain name. Do not forget to create all the required folders and set the right permissions.

Copy one of these templates to /etc/nginx/sites-available/example.com

Proxy example

# Ivan Derevianko aka druss http://druss.co
# Force the domain without www
server {
    server_name  www.example.com;
    listen 80;
    rewrite ^(.*) http://example.com$1 permanent;
}

# Main server
server {
    server_name  example.com;
    listen 80;
    root /var/example.com/www;
    access_log /var/example.com/logs/access.log;
    error_log /var/example.com/logs/error.log;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Php-fpm example

# Ivan Derevianko aka druss http://druss.co
# Force the domain without www
server {
    server_name  www.example.com;
    listen 80;
    rewrite ^(.*) http://example.com$1 permanent;
}

# Main server
server {
    server_name  example.com;
    listen 80;
    root /var/example.com/www;
    access_log /var/example.com/logs/access.log;
    error_log /var/example.com/logs/error.log;

    client_max_body_size 128M;

    index index.php index.html index.htm;
	
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
	
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}

Deploy and run .NET Core application without installed runtime. Self-contained applications.

The .NET Core framework provides one very useful feature – self-contained applications. You do not need to install any .NET runtime on the target computer; you can just copy your application there and run it. Furthermore, you can run it on any platform (Windows, Linux, OSX)!

When you create a self-contained application, after publishing you will see the whole .NET runtime next to your application.

There are some advantages:

  • You do not need to have installed .NET Core on a target machine
  • You can run applications with different .NET Core versions at the same time

But also, disadvantages:

  • Bigger application size (for empty web API project around 50 MB)
  • Limited target runtimes list (see list here)

Let’s create a simple web API application and deploy it to a clean Ubuntu 14.04 machine!

Developer prerequisites

  • Installed .NET Core
  • Editor (Visual Studio Code, Visual Studio 2015, Rider, notepad, etc…)
  • SFTP client (WinSCP)
  • SSH client (PuTTY)

Application

Create an empty application:

dotnet new

Edit project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  
  "dependencies": {
    "Microsoft.NETCore.App": "1.0.0",
    "Microsoft.AspNetCore.Mvc": "1.0.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0"
  },
  
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  },
  
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },
  
  "runtimes": {
      "win10-x64": {},
      "ubuntu.14.04-x64": {}
  }
}

An important part is the runtimes section. You can find the supported runtimes here: RID catalog

Also, you should not include the “type”: “platform” option in the following section:

"Microsoft.NETCore.App": "1.0.0",

Now, edit Program.cs

using Microsoft.AspNetCore.Hosting;

namespace WebServiceApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseUrls("http://+:5000")
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

In this file we configure our hosting environment, which is Kestrel (a cross-platform web server), and specify which IP and port it will listen on.

Create Startup.cs

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace WebServiceApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole();
            loggerFactory.AddDebug();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller}/{action}/{id?}");
            });
        }
    }
}

Here we enable MVC and logging in our application.

Now the logic, LuckController.cs

using Microsoft.AspNetCore.Mvc;

namespace WebServiceApplication
{
    public class LuckController : Controller
    {
        public IActionResult Try()
        {
            return Ok(42);
        }
    }
}

Now let’s restore all the required packages. In the command line:

dotnet restore

And run the application:

dotnet run

If everything is ok, you can go to http://localhost:5000/Luck/Try and you will see 42.

Publish and run application on a remote machine

To create a self-contained application, run the following command:

dotnet publish --configuration Release --runtime ubuntu.14.04-x64

where the --runtime value is one of the runtimes elements from project.json.

After publishing you will see the output folder containing the self-contained application.

Archive the folder and copy it to the target machine.

Extract the archive, navigate to the extracted folder and make the file executable:

chmod +x test

Run the application:

./test

Your first self-contained ASP.NET Core application is ready!

Here is more information about different .NET Core application types

Speed up Selenium WebDriver’s page parsing time

If you are using Selenium WebDriver as a web crawler and think that it is too slow, welcome inside!

In this article, we will see how to make page parsing around 50 times faster.

As an example, I will parse comments from another article on this blog. First I will parse the page using the default WebDriver API (FindElement… methods) and then compare it to CsQuery.

Here is the WebDriver parsing code:

var driver = new ChromeDriver();
driver.Navigate().GoToUrl("http://druss.co/2014/07/fixed-setup-was-unable-to-create-a-new-system-partition-or-locate-an-existing-system-partition-during-installing-windows-8-18-7-vista-etc-from-usb/");

// start stopwatch
var comments = driver.FindElementsByCssSelector("li.comment");
foreach (var comment in comments)
{
    var parserComment = new Comment();

    parserComment.Author = comment.FindElement(By.CssSelector("cite.fn")).Text;
    parserComment.Date = comment.FindElement(By.TagName("time")).Text;
    parserComment.Content = comment.FindElement(By.ClassName("comment-content")).Text;
}
// stop stopwatch

And this is tooooo slow! I have around 225 comments there, and it took 26 seconds to parse them. All of WebDriver’s search methods are very slow; I think this is because it does not cache the page content and always makes a request to the browser.

The other approach I am going to test is a bit different. First, I will get the HTML code of the comments block and then parse it using the CsQuery library (a fast, open-source .NET jQuery implementation):

var driver = new ChromeDriver();
driver.Navigate().GoToUrl("http://druss.co/2014/07/fixed-setup-was-unable-to-create-a-new-system-partition-or-locate-an-existing-system-partition-during-installing-windows-8-18-7-vista-etc-from-usb/");
// start stopwatch
var html = driver.FindElementById("comments").GetAttribute("innerHTML");
CQ dom = html;
var comments = dom["li.comment"];

foreach (var comment in comments.Select(x => x.Cq()))
{
    var parserComment = new Comment();
    parserComment.Author = comment["cite.fn"].Text();
    parserComment.Date = comment["time"].Text();
    parserComment.Content = comment[".comment-content"].Text();
}
// stop stopwatch

This code took only 600 milliseconds

You could also use other approaches such as Html Agility Pack, RegEx, IndexOf(), etc.

The main idea is to wait until the page content is loaded, grab the HTML using FindElement* or a similar method and then parse it with more performance-friendly tools.
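
Putting those pieces together, here is a small sketch (my own example, not from the original article) that explicitly waits for the comments to appear and then parses them with CsQuery; the 10-second timeout is an arbitrary choice:

using System;
using CsQuery;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class CrawlerExample
{
    static void Main()
    {
        var driver = new ChromeDriver();
        driver.Navigate().GoToUrl("http://druss.co/2014/07/fixed-setup-was-unable-to-create-a-new-system-partition-or-locate-an-existing-system-partition-during-installing-windows-8-18-7-vista-etc-from-usb/");

        // Wait (poll) until at least one comment element is present in the DOM.
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until(d => d.FindElements(By.CssSelector("li.comment")).Count > 0);

        // A single call to the browser, then all parsing happens in memory.
        var html = driver.FindElementById("comments").GetAttribute("innerHTML");
        CQ dom = html;
        Console.WriteLine(dom["li.comment"].Length + " comments parsed");

        driver.Quit();
    }
}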

[Mono] Selenium with headless ChromeDriver on Ubuntu Server

If you want to run a C# application (Mono) with Selenium ChromeDriver on Ubuntu Server 16.04 in headless mode, you definitely should read this article. We will use Xvfb as the X server; this lets us emulate “headless” mode because Xvfb performs all graphical operations in memory without showing any screen output.

Install mono

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
sudo apt-get install mono-complete
sudo apt-get install referenceassemblies-pcl

(official docs)

Install Google Chrome

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
sudo apt-get update 
sudo apt-get install google-chrome-stable

Install Xvfb

apt-get install xvfb imagemagick

Run ChromeDriver in headless mode

We will use the following application for the test:

class Program
{
    static void Main(string[] args)
    {
        var driver = new ChromeDriver();
        driver.Navigate().GoToUrl("http://druss.co");

        Console.WriteLine("Started. Press Enter to exit");
        Console.ReadLine();
    }
}
Make sure the chromedriver binary is executable:

chmod +x chromedriver

For example, assume you compiled your app to /home/druss/Selenium/TestChromeDriver.exe. Run it under Xvfb:

xvfb-run --server-args='-screen 0, 1920x1080x24' --auth-file=/home/druss/.Xauth -a mono /home/druss/Selenium/TestChromeDriver.exe

You should see the output from the program.

To run your program in the background:

xvfb-run --server-args='-screen 0, 1920x1080x24' --auth-file=/home/druss/.Xauth -a mono /home/druss/Selenium/TestChromeDriver.exe &> /dev/null &

Make screenshots

You can take a screenshot of the running process. Every time you start xvfb-run with the -a parameter it searches for a free display (starting from 99) and uses it as the output.

To access that display you should have set the auth file parameter:

--auth-file=/home/druss/.Xauth

After this you can take a screenshot:

DISPLAY=:99 import -window root testChromeDriver.png

ZohoPeopleTimeLogger v1.4 – Smart filtering

Changes:

  • Show vacations only for the currently logged-in user (previously, if you had access to another user’s leave tracker, you would see their days off as your vacations)
  • Show only days off of the Holiday and Sick types (excluding 2-hour short leaves and others)
  • Display the leave type in the calendar (before it was always Vacation, now Holiday or Sick)

Download (GitHub)

ZohoPeopleClient

This C# library was also updated to v1.0.2. There are two new methods available in the Fetch Record API:

public enum SearchColumn
{
     EMPLOYEEID,
     EMPLOYEEMAILALIAS
}

Task<List<dynamic>> IFetchRecordApi.GetByFormAsync(string formName, SearchColumn searchColumn, string searchValue);
Task<List<dynamic>> IFetchRecordApi.GetByViewAsync(string viewName, SearchColumn searchColumn, string searchValue);

GitHub

C# 7.0 Pattern Matching. Part 1

Great news! You can already try new C# 7.0 features. All you need is Visual Studio 15 Preview. Let’s start!

Today we are going to talk about pattern matching and look under the hood of this nice feature. Unfortunately, it is only partly available in VS 15, so we need to wait for the next release. Or you can try to get the latest Roslyn from GitHub.

is operator

The is operator is extended to test an expression against a pattern.

With that you can replace this:

object obj = "Hello, World!";
var str = obj as string;
if (str != null)
{
    Console.WriteLine(str);
}

With this shorter (and safer) version:

object obj = "Hello, World!";
if (obj is string str)
{
    Console.WriteLine(str);
}

It is safer because the str variable is visible only in the if scope.

The produced IL code is identical:

IL_0000:  ldstr      "Hello, World!"
IL_0005:  isinst     [mscorlib]System.String
IL_000a:  stloc.0
IL_000b:  ldloc.0
IL_000c:  brfalse.s  IL_0014

IL_000e:  ldloc.0
IL_000f:  call       void [mscorlib]System.Console::WriteLine(string)
IL_0014:  ret

You can also use pattern matching to match a value type:

object obj = 1;
if (obj is int num)
{
    Console.WriteLine(num);
}

But you should be aware that internally it will be unboxed via the Nullable<T> type. Here is the IL code:

.locals init ([0] int32 num,
         [1] valuetype [mscorlib]System.Nullable`1<int32> V_1)
IL_0000:  ldc.i4.1
IL_0001:  box        [mscorlib]System.Int32
IL_0006:  isinst     valuetype [mscorlib]System.Nullable`1<int32>
IL_000b:  unbox.any  valuetype [mscorlib]System.Nullable`1<int32>
IL_0010:  stloc.1
IL_0011:  ldloca.s   V_1
IL_0013:  call       instance !0 valuetype [mscorlib]System.Nullable`1<int32>::GetValueOrDefault()
IL_0018:  stloc.0
IL_0019:  ldloca.s   V_1
IL_001b:  call       instance bool valuetype [mscorlib]System.Nullable`1<int32>::get_HasValue()
IL_0020:  brfalse.s  IL_0028

IL_0022:  ldloc.0
IL_0023:  call       void [mscorlib]System.Console::WriteLine(int32)
IL_0028:  ret

It is equivalent to the following C# code:

object obj = 1;
Nullable<int> V_1 = obj as Nullable<int>;
int num = V_1.GetValueOrDefault();
if (V_1.HasValue)
{
    Console.WriteLine(num);
}

Even though it is unboxed via the Nullable<int> type, in the if scope you will have a num variable of type System.Int32.

You can also use var in pattern matching. A match against the var pattern always succeeds.

object obj = "test";
if (obj is var str)
{
    Console.WriteLine(str);
}

As well as the * (wildcard) pattern:

object obj = "test";
if (obj is *)
{
    Console.WriteLine(true);
}

There is also a constant pattern:

object obj = 1;
if (obj is 1)
{
    Console.WriteLine(true);
}

It will be replaced with a call to the object.Equals method:

object obj = 1;
if (object.Equals(obj, 1))
{
 Console.WriteLine(true);
}

Unfortunately, the user-defined is operator is not implemented yet, so recursive pattern matching does not work.

That’s it for today. In the next part, we will talk about pattern matching in the switch statement:

object obj = "String";

switch (obj)
{
    case string s:
        Console.WriteLine(s + "test");
        break;
    case int i:
        Console.WriteLine(i + 10);
        break;
}

C# 7.0 Local Functions

C# 7.0 is coming! Even though it is not released yet, you can already try new C# 7.0 features. To do that you need:

  • Visual Studio 15 Preview
  • Set __DEMO__ and __DEMO_EXPERIMENTAL__ as conditional compilation symbols in the project settings.

Not all of the new features are available in the current preview, but you can already play with some of them.

Today we are going to take a closer look at

Local Functions

private static void Main(string[] args)
{
    int LocalFunction(int arg)
    {
        return 42 * arg;
    }

    Console.WriteLine(LocalFunction(1));
}

or shorter:

private static void Main(string[] args)
{
    int LocalFunction(int arg) => 42 * arg;

    Console.WriteLine(LocalFunction(1));
}

or even like that:

private static void Main(string[] args)
{
    int localVar = 42;
    int LocalFunction(int arg)
    {
        return localVar * arg;
    }

    Console.WriteLine(LocalFunction(1));
}

Do you like it? I do!

You can define a local method in any scope and it will be available only in that scope (and in all inner scopes as well, the same as a local variable).

You will have access to all of the outer scope’s variables and methods (see below how this is implemented).

Benefits

  • Small helper methods required only in some scope (usually with LINQ expressions)
  • No GC allocations compared to anonymous methods and lambda expressions
  • You can pass ref and out parameters, which anonymous methods and lambda expressions do not allow (see the sketch below)
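
Here is a small sketch of that last point (my own example, not from the original post): a local function can declare ref and out parameters, which lambdas and anonymous methods cannot.

using System;

class RefOutLocalFunctionDemo
{
    private static void Main(string[] args)
    {
        // Local function with ref and out parameters.
        void Add(ref int total, int value, out bool becameAnswer)
        {
            total += value;
            becameAnswer = total == 42;
        }

        int sum = 40;
        bool done;
        Add(ref sum, 2, out done);

        Console.WriteLine(sum);   // prints 42
        Console.WriteLine(done);  // prints True
    }
}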

Let’s take a closer look at what is under the hood.

IL code

After disassembling the first example with ildasm.exe I found the following:

  .method private hidebysig static void  Main(string[] args) cil managed
  {
    .entrypoint
    // Code size       12 (0xc)
    .maxstack  8
    IL_0000:  ldc.i4.1
    IL_0001:  call       int32 LocalFunctionsTest.Program::'<Main>g__LocalFunction0_0'(int32)
    IL_0006:  call       void [mscorlib]System.Console::WriteLine(int32)
    IL_000b:  ret
  } // end of method Program::Main

  .method assembly hidebysig static int32 
          '<Main>g__LocalFunction0_0'(int32 arg) cil managed
  {
    .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 ) 
    // Code size       2 (0x2)
    .maxstack  8
    IL_0000:  ldarg.0
    IL_0001:  ret
  } // end of method Program::'<Main>g__LocalFunction0_0'

As you can see, LocalFunction is compiled to a regular method with the specific name ‘<Main>g__LocalFunction0_0’ and the CompilerGenerated attribute. Here is the C# equivalent:

private static void Main(string[] args)
{
    Console.WriteLine(LocalFunction(1));
}
        
private static int LocalFunction(int arg)
{
    return 1 * arg;
}

Because LocalFunction was declared in a static context, the generated method is also static. If you define it inside an instance method, it will be compiled to an instance method.
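
A quick sketch of that second case (my own example, not from the post): a local function declared inside an instance method that touches instance state, so the compiler emits it as an instance method of the class.

public class Multiplier
{
    // Hypothetical class used only for this illustration.
    private readonly int factor = 42;

    public int Calculate(int arg)
    {
        // Declared in an instance context and reading an instance field,
        // so the compiler-generated method is an instance method too.
        int LocalFunction(int x) => factor * x;

        return LocalFunction(arg);
    }
}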

The third example (the one using an outer scope variable) shows slightly different IL:

.class public auto ansi beforefieldinit LocalFunctionsTest.Program
       extends [mscorlib]System.Object
{
  .class abstract auto ansi sealed nested private beforefieldinit '<>c__DisplayClass0_0'
         extends [mscorlib]System.ValueType
  {
    .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 ) 
    .field public int32 localVar
  } // end of class '<>c__DisplayClass0_0'

  .method private hidebysig static void  Main(string[] args) cil managed
  {
    .entrypoint
    // Code size       31 (0x1f)
    .maxstack  2
    .locals init ([0] valuetype LocalFunctionsTest.Program/'<>c__DisplayClass0_0' 'CS$<>8__locals0')
    IL_0000:  ldloca.s   'CS$<>8__locals0'
    IL_0002:  initobj    LocalFunctionsTest.Program/'<>c__DisplayClass0_0'
    IL_0008:  ldloca.s   'CS$<>8__locals0'
    IL_000a:  ldc.i4.s   42
    IL_000c:  stfld      int32 LocalFunctionsTest.Program/'<>c__DisplayClass0_0'::localVar
    IL_0011:  ldc.i4.1
    IL_0012:  ldloca.s   'CS$<>8__locals0'
    IL_0014:  call       int32 LocalFunctionsTest.Program::'<Main>g__LocalFunction0_0'(int32,
                                                                                       valuetype LocalFunctionsTest.Program/'<>c__DisplayClass0_0'&)
    IL_0019:  call       void [mscorlib]System.Console::WriteLine(int32)
    IL_001e:  ret
  } // end of method Program::Main

  .method public hidebysig specialname rtspecialname 
          instance void  .ctor() cil managed
  {
    // Code size       7 (0x7)
    .maxstack  8
    IL_0000:  ldarg.0
    IL_0001:  call       instance void [mscorlib]System.Object::.ctor()
    IL_0006:  ret
  } // end of method Program::.ctor

  .method assembly hidebysig static int32 
          '<Main>g__LocalFunction0_0'(int32 arg,
                                      valuetype LocalFunctionsTest.Program/'<>c__DisplayClass0_0'& A_1) cil managed
  {
    .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 ) 
    // Code size       9 (0x9)
    .maxstack  8
    IL_0000:  ldarg.1
    IL_0001:  ldfld      int32 LocalFunctionsTest.Program/'<>c__DisplayClass0_0'::localVar
    IL_0006:  ldarg.0
    IL_0007:  mul
    IL_0008:  ret
  } // end of method Program::'<Main>g__LocalFunction0_0'

} // end of class LocalFunctionsTest.Program

As you can see, the compiler created an inner struct and uses it to pass the “outer scope” variable(s) to the local function as an implicit by-reference parameter. Here is the C# equivalent:

public class Program
{
    private struct c__DisplayClass0_0
    {
        public int localVar;
    }

    private static void Main(string[] args)
    {
        int localVar = 42;
        var implicitArg = new c__DisplayClass0_0();
        implicitArg.localVar = localVar;

        Console.WriteLine(LocalFunction(1, ref implicitArg));
    }

    private static int LocalFunction(int arg, ref c__DisplayClass0_0 implicitArg)
    {
        return implicitArg.localVar * arg;
    }
}

That is why you cannot declare a variable in a local function with the same name as a variable declared in the outer scope.
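
For example (my own illustration, reusing the third example; the exact compiler error text may differ between previews):

private static void Main(string[] args)
{
    int localVar = 42;
    int LocalFunction(int arg)
    {
        // int localVar = 1;  // does not compile: the name 'localVar'
        //                    // is already used in the enclosing scope
        return localVar * arg;
    }

    Console.WriteLine(LocalFunction(1));
}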

In fact, it does not matter in which scope the captured variable was declared. It can be a for loop, an if, a while, curly brackets {} or another local function. For each outer scope, a separate structure is created to hold all of the scope-related variables.