Author Archives: druss

How-to Fix “Http failure during parsing” Error

In my frontend application, I was making requests to the backend API, but in some cases I was getting the following error: Http failure during parsing for http://localhost/api/get-data. If you are facing the same problem, welcome on board!

Reasons for the “Http failure during parsing for” error

Invalid JSON format

One of the possible reasons is that your JSON has an invalid format. Open DevTools -> Network tab, copy the response, and validate it with a JSON validator.
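If you prefer to sanity-check the payload outside the browser, a quick script does the same job as an online validator. A minimal sketch (the response_body value is a made-up stand-in for whatever you copied from the Network tab):

```python
import json

# Stand-in for the response copied from DevTools -> Network
response_body = '{"name": "test", "values": [1, 2, 3]}'

try:
    data = json.loads(response_body)
    print("Valid JSON with keys:", sorted(data))
except json.JSONDecodeError as err:
    # err.pos points at the first offending character
    print(f"Invalid JSON at position {err.pos}: {err.msg}")
```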

Wrong ResponseType

Depending on the expected response type, Angular's HttpClient will try to parse the response into a JSON object. You can set the responseType option before making the request to avoid that:

return this.http.post(`${this.endpoint}/account/login`, payload, { ...options, responseType: 'text' });

Invisible \0 character (null character) in the response

If you have already validated the JSON from DevTools in a JSON validation tool and setting the responseType didn’t help, try this one. Get the JSON from your backend (in the case of C#, you can put a breakpoint in VS and copy the resulting JSON before the response is sent back) and see if there are any \0 characters. In my case, one of the fields contained an exception stack trace from an outside service, and sometimes there was a \0 character in the middle of the stack trace. DevTools in Chrome automatically ignores this character, but HttpClient has a hard time parsing JSON with a null character in the middle.

To fix this in C# you can do something like this:

result.FieldWithStackTrace = result.FieldWithStackTrace?.Replace("\0", string.Empty);
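To see why that stray null character breaks parsing, here is a small illustrative sketch in Python (the field name and stack-trace value are invented): a strict JSON parser rejects the raw \0, and stripping it first, as the C# line above does, makes the payload parseable.

```python
import json

# Hypothetical payload with a raw null character inside a string value
raw = '{"fieldWithStackTrace": "at Foo.Bar()\x00 at Baz.Qux()"}'

try:
    json.loads(raw)
except json.JSONDecodeError:
    # Control characters inside JSON strings must be escaped,
    # so a strict parser rejects the raw \0
    print("strict parser rejects the raw \\0")

# Mirrors result.FieldWithStackTrace?.Replace("\0", string.Empty)
cleaned = raw.replace("\x00", "")
print(json.loads(cleaned)["fieldWithStackTrace"])
```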

Blazor – It’s Time to Forget JavaScript

We are not gonna talk here about why we should forget JavaScript as a nightmare, because it’s obvious! Just take a look at this code and tell me what the result will be:

[] + [] = 
{} + [] = 
[] + {} =

You can find the answers in this amazing talk called “Wat”.

Time to Forget JS

Fortunately, with the creation of WebAssembly, we can use other languages to write web applications. You can compile your C, C++, or Rust source code to WebAssembly bytecode and run it in the browser. As the next step in web development, engineers from Microsoft have compiled the Mono runtime to WebAssembly, and the ASP.NET Core team is now working on an amazing framework called Blazor. This platform gives you the possibility to write component-based web applications using C# and Visual Studio (VS Code is also supported).

Finally, no more stupid runtime errors because you used an object of the wrong type (we all love C# for static typing). We can finally share code (assemblies) between client and server: you can define enums, validation logic, business rules, etc. once and use them everywhere. And of course, it’s gonna be the same language across the whole system.

To be clear, there are two “hosting models” for Blazor: client-side (runs in the browser on a WebAssembly-based .NET runtime) and server-side (runs on the backend; the client gets HTML and interacts with the server using SignalR).


In this article, I’m gonna talk about client-side Blazor. But the main concepts, the app and component models, are the same.

Component Architecture

As you can see from the image, on top of .NET runtime we have boxes called “Razor Components”. What is Razor Component?

A component is an element of UI, such as a page, dialog, form, or even a button, that defines rendering logic, can react to user events, and can be nested and reused. It’s written in Razor markup (a mix of HTML and C#) and has the .razor file extension. Here is an example of a dialog (Dialog.razor, from the official documentation):



<div>
    <h1>@Title</h1>
    @ChildContent
    <button @onclick="@OnYes">Yes!</button>
</div>

@code {
    [Parameter]
    private string Title { get; set; }

    [Parameter]
    private RenderFragment ChildContent { get; set; }

    private void OnYes()
    {
        Console.WriteLine("Write to the console in C#! 'Yes' button was selected.");
    }
}

Now you can use this component in other parts of your application and define your own content via the ChildContent and Title properties. Let’s see how we do that in Index.razor:

@page "/"

<h1>Hello, world!</h1>

Welcome to your

<Dialog Title="Blazor">
    Do you want to <i>learn more</i> about Blazor? Then go to the <a href="">official documentation.</a>
</Dialog>

This example also shows one of the ways we can define routing in the app: you just use the @page directive and pass the relative path to the page.

JavaScript Interop

If you want to access Browser APIs (e.g. local storage, history, etc.) or use your favorite JS library in the component, you can easily interoperate with JavaScript. Components can call JS code, and JavaScript code can call into C#.

Browser Support

Since WebAssembly works in all four major browsers, the latest versions of Edge, Firefox, Chrome (including Android), and Safari (including iOS) are supported by client-side Blazor.

Awesome! Can I have two?

Unfortunately, Blazor is still in preview and there are some rough edges.

It is slow at number crunching. Even though rendering is quite fast, if you compare some basic operations to JS you will see a 20 to 100 times slowdown. This is because the Mono runtime is running on WebAssembly, so basically one virtual machine is running inside another.


The ASP.NET team is working on Ahead-of-Time (AOT) compilation for Blazor, so instead of compiling C# to IL, it will be compiled directly to WebAssembly bytecode.

Also, the size of the bundle is quite big: you need to download around 2.6 MB of data when you load the application for the first time. Of course, it gets cached in the browser, but still, 2.6 MB is quite a lot for an empty app. It gets better if you use the /p:PublishTrimmed=true switch while publishing.

There is no live reload yet, so if you change anything in your component, you need to rebuild it and then reload the page.

Client-side Blazor will not be released until .NET 5, and server-side Blazor will be available as part of the .NET Core 3.0 release. Let’s see what it brings us!

WebAssembly – The Next Step in Web Development

In the beginning, God created the Bit and the Byte. And from those he created the Word.

And there were two Bytes in the Word, and nothing else existed. And God separated the One from the Zero, and he saw it was good.

Assembly language

A long, long time ago, developers used machine code to program computers. It was hard to keep all the instructions in mind, easy to make a mistake, and almost impossible to read. After struggling with machine code, they created mnemonic codes to refer to machine code instructions, aka assembly language. The assembler was responsible for translating assembly language into machine code.


Higher-level abstractions

The complexity of systems kept growing, and so higher-level programming languages were created to hide this complexity behind abstractions. The first widely used high-level general-purpose programming language was FORTRAN, invented at IBM in 1954. Then we got BASIC, C, C++, and many more. We still use compilers to translate high-level languages into machine code. The problem is that a compiler produces architecture-specific code: if you have compiled your C program for the x86 architecture, you cannot run it on an ARM processor (due to a different set of available CPU instructions).


To solve this problem, developers created another set of abstractions: bytecode (aka intermediate language) and virtual machines. Bytecode is the instruction set of the virtual machine, and it is architecture-independent. Translating bytecode into CPU-specific instructions is the responsibility of the virtual machine. This way, we can write once, run anywhere! C# and Java are good examples of this concept.
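Python itself is a handy illustration of this idea: its compiler emits architecture-independent bytecode, and the CPython VM executes it. The standard dis module lets you peek at those instructions:

```python
import dis

def add(a, b):
    return a + b

# CPython compiled `add` to bytecode the moment it was defined;
# dis prints the VM instructions (exact opcode names vary between versions)
dis.dis(add)
```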


The Internet Era

In the 90s the Internet was born: the first web pages, the first browsers, the first dynamic pages. In May 1995, Brendan Eich wrote the prototype of the JavaScript language in 10 days (we all know the consequences of that), and it was shipped in Netscape Navigator. We have been living in the Internet Era ever since.


We know that JavaScript is neither the most efficient nor the fastest programming language; despite the effort engineers at Google, Mozilla, Apple, Microsoft, and other companies put into JS engines, there is still a lot to improve. Another problem we have: there is only one language for developing client-side web applications. Yeah, I know about TypeScript, but it still transpiles into JavaScript code before it can be executed in the browser.

We need to go deeper – WebAssembly


To solve these problems, the W3C, together with people from Mozilla, Google, Microsoft, and Apple, created a specification for WebAssembly – an open standard for binary code and a textual assembly language to enable high-performance applications on web pages. Now you can compile your C code (as well as C++ and Rust) into WebAssembly bytecode and execute it inside a virtual machine running in the browser. WebAssembly is designed to be faster to parse than JavaScript, as well as faster to execute, and to enable a very compact code representation.


How does it work?

For now, all interaction with WebAssembly code is done via JavaScript, so when you write WebAssembly code, you should define which functions you want to import and export so that you can call them later.


Here we import function i and export function e:

;; simple.wasm
(module
  (func $i (import "imports" "i") (param i32))
  (func (export "e")
    i32.const 42
    call $i))

After the page is loaded in the browser, you can load the .wasm file as a regular resource and do the following to call the wasm function:

  • Get the .wasm bytes into a typed array or ArrayBuffer
  • Compile the bytes into a WebAssembly.Module
  • Instantiate the WebAssembly.Module with imports to get the callable exports
  • Call function from the module

Another important concept is linear memory – a low-overhead “shared memory” between JS and WebAssembly. You can pass data in, do some calculation on the WebAssembly side, and pass the result back to display it on the web page.


PSPDFKit has created a real-world benchmark based on PSPDFKit for Web. The benchmark compares real-world code performance in WASM and JavaScript in three major browsers: Chrome 75, Firefox 68, and Edge 44. A lower result is better:


As you can see, WebAssembly shows a better score in all browsers; the biggest difference between JS and WASM is in Firefox, and Edge is the slowest of all.

Browser support, limitations, and future plans

At the moment, the MVP version of WebAssembly is supported in all major browsers (including iOS and Android). You can compile C, C++, and Rust code to WebAssembly.

Microsoft has compiled the Mono runtime into WebAssembly, so you can run .NET code in the browser (a great story for another post, which will bring us even deeper, since you are running IL code in a VM (the CLR) which runs in another VM (WebAssembly) in the browser). On top of that, Microsoft is developing Blazor – a component-based client-side platform for writing C# web applications that run in the browser (bye-bye JavaScript).

Also, right now you cannot interact with the DOM from WebAssembly code, so all DOM interactions have to be done via proxy JS functions.

There are future plans to allow WebAssembly modules to be loaded just like ES6 modules (using <script type='module'>) as well as adding multithreading support, garbage collection, and DOM interaction.


Since most modern applications run in the browser, I think it’s a great step forward to bring other languages into the game. I can’t wait to see the next versions of the WebAssembly standard and the variety of features it will bring us.

NDC Oslo 2019 – Inspiration, Motivation, People


The week is over, and so is a great developer conference in Norway. This was my first NDC, and I was lucky to be in the city where it all started back in 2008: Oslo!

For the first two days, I participated in a workshop about Blazor, which allows you to write client-side applications in C# and run them in the browser. This is all powered by WebAssembly and the Mono runtime compiled to WebAssembly bytecode. The workshop was guided by ASP.NET Core team members Ryan Nowak and Steve Sanderson.

I am so excited about Blazor that I want to share what I’ve learned during the workshop with you all! A few articles about Blazor and how to use it are coming soon!

The conference started with a motivational speech from David Neal. He encouraged people not to be afraid and start sharing knowledge and experience with others.
One of his awesome drawings:

reverentgeek - you are awesome

I am really excited about the next big release, .NET Core 3.0. Blazor, gRPC, and Windows support are all well-known features we are waiting for, but what about the other PRs that were merged before the release? David Fowler and Damian Edwards told us about hidden gems in ASP.NET Core and .NET Core. Among others: trimmed and single-file publishing options.


The next day started with an introduction to WebAssembly by Guy Royse. He showed us how we went from machine code to assembly language, then to C-like languages, and then how we invented virtual machines to write architecture-agnostic code. Now we are applying the same principles in the browser.

There was also one excellent presentation by Richard Campbell, co-host of the .NET Rocks show: The Moon: Gateway to the Solar System. He showed us the past, present, and future of moon missions. I hope someday we end up having a moon colony.


A brilliant keynote talk opened the last day. Donovan Brown showed us how Microsoft has transformed TFS to Azure DevOps and went from a three-year delivery cycle to three-week sprints.


In the next talk, Dylan Beattie demonstrated what architecture is in the software development world and how to define, communicate, and reinforce your architecture.


We all hate bugs, and we all fix them, but what if you develop software that is the backbone for millions of other programs (e.g., the .NET Framework)? Then fixing bugs is not so trivial, and there are many tradeoffs. Great stories from Karel Zikmund, a Software Engineering Manager on the .NET team.

.net war stories

It was a great conference full of technical details, inspiration, motivation, and great people.

Thanks, SkyKick for sponsoring this trip and providing a great place to create value for our partners!

Install .NET Core 3.0 on Raspberry Pi

As you might know, the .NET Core 3.0 SDK and Runtime are available on Raspberry Pi (actually, on any Linux ARM32 or ARM64 architecture, as well as x64). With the new 3.0 version of .NET Core, you can run your console or ASP.NET Core web site/API projects on a cheap $35 device without any problems (just do not expect high performance).

We are using a Raspberry Pi 3 to control our LinkedIn and Instagram automation (I will tell you about these projects when they are ready for the public) via .NET Core console applications.

Let’s prepare Raspberry Pi 3 to run any .NET Core application:

Install Raspbian

First of all, we need to prepare the Raspberry Pi device:

  1. Download the Raspbian image
  2. Download Etcher to write this image to an SD card
  3. Insert the SD card into your PC
  4. Open Etcher, select the SD card and the image, and press “Flash!”

Install .NET Core 3.0 Runtime

After you are done with the SD card, boot your Raspberry Pi and follow the initial configuration wizard.

Default login: pi

Default password:  raspberry

Now you can open a browser and download the latest .NET Core Runtime (at the moment, the current version is 3.0.0-preview6) from the Microsoft website. You should select the ARM32 version of the ASP.NET Core binaries under the Linux section.

When the download is finished, do the following:

  • Install prerequisites: sudo apt-get install curl libunwind8 gettext
  • Extract the downloaded archive: sudo mkdir -p /opt/dotnet && sudo tar zxf ARCHIVE_NAME.tar.gz -C /opt/dotnet
  • Make dotnet visible to the system: sudo ln -s /opt/dotnet/dotnet /usr/local/bin
  • Test it: dotnet --version

Now you can publish your project for the ARM32 architecture, copy it to the device, make sure the file can be executed by running chmod +x name_of_project, and run it with the ./name_of_project command.

Loop Through The Diagonal Elements In Two Dimensional Array

If you need to loop through only the diagonal elements of a two-dimensional array, you can use the following C# code (it should be more or less the same in any programming language):

int width = 5;
int height = 17;

bool[,] array = new bool[width, height];

var ratio = width / (float)height;

for (int i = 0, j = 0; i < width && j < height;)
{
	array[i, j] = true; // only diagonal elements
	if ((i + 1) / (float)(j + 1) <= ratio)
	{
		i++;
		if (ratio <= 1)
			j++;
	}
	else
	{
		j++;
		if (ratio >= 1)
			i++;
	}
}

This is how it will look for a “wide” array (width > height):


and for a “tall” array (height > width):


and for a “square” array (width == height):


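In case the pattern images do not load in your reader, here is a quick Python sketch of one way such a diagonal walk can be implemented; it prints the three cases as ASCII grids, with # marking a visited element:

```python
def diagonal(width, height):
    """Mark a 'stretched' diagonal in a width x height grid."""
    grid = [[False] * height for _ in range(width)]
    ratio = width / height
    i = j = 0
    while i < width and j < height:
        grid[i][j] = True
        # Advance along whichever axis keeps the walk close to the diagonal
        if (i + 1) / (j + 1) <= ratio:
            i += 1
            if ratio <= 1:
                j += 1
        else:
            j += 1
            if ratio >= 1:
                i += 1
    return grid

# Render the wide, tall, and square cases row by row
for w, h in [(5, 3), (3, 5), (4, 4)]:
    grid = diagonal(w, h)
    for y in range(h):
        print("".join("#" if grid[x][y] else "." for x in range(w)))
    print()
```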
5 Simple Steps to Migrate Let’s Encrypt Certificates (certbot) to a New Server

This guide is helpful for people who have decided to migrate a website to another web server and have SSL certificates from Let’s Encrypt.

Note: This article describes the process for Ubuntu 18.04 but can also be used for other Linux distros (maybe with some small changes). Also, replace the domain in the commands with your own.


To successfully migrate your certificates, you need to complete these 5 simple steps:

  • Archive the certificates on the old server
  • Move them to the new server
  • Extract them to the correct location
  • Create symlinks
  • Redirect the domain

Let’s go through them in a bit more detail:

Archive SSL certificates

First of all, you should find the actual location of the certificates. You can open your nginx or apache configuration to see it:

cat /etc/nginx/sites-enabled/
 ssl_certificate /etc/letsencrypt/live/; # managed by Certbot
 ssl_certificate_key /etc/letsencrypt/live/; # managed by Certbot

But this is not the actual place where the certificates are located. These are symlinks; to see the actual location, execute the following command:

sudo ls -l /etc/letsencrypt/live/
total 0
lrwxrwxrwx 1 root root 46 Mar 25 13:23 cert.pem -> /etc/letsencrypt/archive/
lrwxrwxrwx 1 root root 47 Mar 25 13:24 chain.pem -> /etc/letsencrypt/archive/
lrwxrwxrwx 1 root root 51 Mar 25 13:24 fullchain.pem -> /etc/letsencrypt/archive/
lrwxrwxrwx 1 root root 49 Mar 25 13:24 privkey.pem -> /etc/letsencrypt/archive/

You also need to archive the renewal config for your website. It’s located in the /etc/letsencrypt/renewal/<domain>/ folder. To archive all the files, run the following:

sudo tar -chvzf certs.tar.gz /etc/letsencrypt/archive/ /etc/letsencrypt/renewal/

Now you can copy this archive to the website location, so you can download it to the new server in the next step:

cp certs.tar.gz /var/www/

Move SSL certificates

This is a really simple step. Log in to the new server and download certificates:


Extract to the correct location

Now you need to extract the files to the correct location on the new server. Inside the archive we already have the correct folder structure, so you can extract it “as is” if you are in the root folder:

cd /
sudo tar -xvf ~/certs.tar.gz

Note: If the new server has a different Linux distro or a custom letsencrypt installation, you may need to manually copy the files to the correct location.

Create symlinks

For everything to work correctly, you need to create symlinks in the live folder for your domain:

sudo ln -s /etc/letsencrypt/archive/ /etc/letsencrypt/live/
sudo ln -s /etc/letsencrypt/archive/ /etc/letsencrypt/live/
sudo ln -s /etc/letsencrypt/archive/ /etc/letsencrypt/live/
sudo ln -s /etc/letsencrypt/archive/ /etc/letsencrypt/live/

Point domain to the new server

Update nginx or apache configuration to use new certificates (for nginx):

 ssl_certificate /etc/letsencrypt/live/; # managed by Certbot
 ssl_certificate_key /etc/letsencrypt/live/; # managed by Certbot

Go to your DNS manager and change the A record so it points to the new server.

Note: At this point, you should have all the content and database migrated to the new server, so you can safely switch your domain to the new server.

This step is required to successfully run a test renewal:

sudo letsencrypt renew --dry-run

You do not need to modify the cron tasks for certbot, since it’s configured in a way that renews all certificates:

sudo cat /etc/cron.d/certbot


0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q renew

That’s it: the domain name is pointing to the new server, and the certificates can be renewed automatically.

Divbyte – Software Development Company

I would like to share with everyone the project I have been working on for some time.

Since I love solving problems, communicating with people, and programming (significantly more than 40 hours per week), and since working on pet projects became a bit boring, I decided to start helping real people with real problems.

I had a choice: find a part-time freelance job, which I did a few times, or move to the next level, establish my own software development company and, together with other enthusiasts, start helping people on a larger scale. I discussed this idea with my developer friends and Tatiana, a top-notch business development specialist, and we decided to go for it!

That’s how Divbyte was born!

A group of hardcore software engineers and an excellent Biz Dev specialist: what could be better for a great start? Tatiana immediately found a way to offer our services on the market. From the engineering side, we deliver high-quality results on time, without exceeding the original budget.

You can see reviews from satisfied clients of our first projects on the website (we will publish case studies soon).
The team has grown since then and is ready to take on new, more significant projects. Today we are launching our website and want to share it with you! If you need a team of excellent developers, designers, architects, or QA engineers – don’t hesitate to contact us!

We are ready to solve your problems.

ASP.NET Core + PostgreSQL + Docker + Bitbucket = ♥

How do you build, test, and deploy your ASP.NET Core application in a single click (commit & push)? In this article, I will answer this question and show you how to configure CI and CD with Docker and Bitbucket.

We will develop a simple ASP.NET Core application with a single API method to save string values in a database. We will use PostgreSQL as the storage for those values. All code will be hosted in a Bitbucket git repository, and we will configure Bitbucket Pipelines to build our application, create a Docker image, and push it to Docker Hub every time we push code to the remote repository. After our image has been pushed to Docker Hub, it will trigger a webhook on our “production” server, which will pull the uploaded image from Docker Hub and restart docker-compose with the new image.

Docker and docker-compose

Let’s start with some basic tools we are going to use in this article. Those who are already familiar with docker and docker-compose can skip directly to the next chapter.

What is docker? Here is the official answer. And a simple one for those who have never worked with containers before but have experience with virtual machines:

docker container – lightweight “virtual machine”

docker image – initial snapshot of the “vm”

Good explanation from Stack Overflow about difference between container and image: “An instance of an image is called a container. You have an image, which is a set of layers as you describe. If you start this image, you have a running container of this image. You can have many running containers of the same image.” Thomas Uhrig

Docker Hub – a cloud-based registry of docker images. You can create your own image and then push it to Docker Hub (similar to GitHub for your code)

docker cli tools – a set of tools to manage images and containers, as well as to pull and push images from Docker Hub and do many other things

Dockerfile – a file that contains the instructions to create an image

docker-compose – a tool to define and run multiple containers as a single application

ASP.NET Core application

Create WebAPI application:

mkdir test
cd test 
dotnet new webapi

Add PostgreSQL support to your project (edit the test.csproj file):

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.1.0" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="1.1.0" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.Design" Version="1.1.0" />
  </ItemGroup>

</Project>


Add AppDbContext.cs and Value.cs:

using Microsoft.EntityFrameworkCore;

namespace test
{
    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

        public DbSet<Value> Values { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            base.OnModelCreating(builder);
        }
    }
}

namespace test
{
    public class Value
    {
        public int Id { get; set; }
        public string Date { get; set; }
    }
}

Edit Startup.cs

public void ConfigureServices(IServiceCollection services)
{
	// Add framework services.
	services.AddMvc();
	var sqlConnectionString = Configuration.GetConnectionString("DataAccessPostgreSqlProvider");
	services.AddDbContext<AppDbContext>(options =>
		options.UseNpgsql(sqlConnectionString));
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	using (var context = app.ApplicationServices.GetService(typeof(AppDbContext)) as AppDbContext)
	{
		context.Database.EnsureCreated();
		// Other db initialization code.
	}
	app.UseMvc();
}

And now, specify the connection string in appsettings.json:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "ConnectionStrings": {
    "DataAccessPostgreSqlProvider": "User ID=test;Password=test;Host=testpostgres;Port=5432;Database=test;Pooling=true;"
  }
}

Remember the Host, User ID, and Password values (Database should be the same as User ID); we will use them during the PostgreSQL container configuration.
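To make the pairing explicit, here is a purely illustrative Python snippet that splits the connection string into its key/value parts; User ID, Password, and Host are the values that must line up with the postgres container settings later on:

```python
conn = ("User ID=test;Password=test;Host=testpostgres;"
        "Port=5432;Database=test;Pooling=true;")

# Split "Key=Value;Key=Value;..." into a dict
parts = dict(p.split("=", 1) for p in conn.rstrip(";").split(";"))

# These must match POSTGRES_USER / POSTGRES_PASSWORD and the compose service name
print(parts["User ID"], parts["Password"], parts["Host"])
```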

Restore, build and publish:

dotnet restore test.csproj
dotnet build test.csproj
dotnet publish -c Release -o publish_output test.csproj

If you try to start it right now, you will get an error that the PostgreSQL port is not reachable.

If you have PostgreSQL installed locally, you can change Host, ID, and Password to your local ones and run it again to test.

Build Docker image

Now that we have application artifacts in the publish_output folder, it is time to build our Docker image.

Create a Dockerfile in the project root:

FROM microsoft/aspnetcore:1.1
EXPOSE 80
COPY publish_output .
ENTRYPOINT ["dotnet", "test.dll"]

Here we define that our image is based on microsoft/aspnetcore:1.1 from Docker Hub; then we expose port 80, copy our application to the root of the container, and define the entry point (the command that will be executed when we start the container).

You can already test it by running:

docker build -t test-image .

This command will create an image named test-image from the Dockerfile.

You can run it:

docker run test-image

Create docker-compose.yml

In this file, we will describe the dependencies between our application image and the official postgres image:

version: '2'

services:
  testpostgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
    volumes:
      - pgdata:/var/lib/postgresql/data

  testapp:
    image: testapp
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:80
    depends_on:
      - "testpostgres"

volumes:
  pgdata:


In this file, we describe two services, their parameters, and the dependencies between them. As we can see, we use the standard postgres image from Docker Hub and pass some parameters. The service name should be the same as the host specified in the connection string, and the user and password as well. Then we specify a docker volume – persistent data storage for our postgres container.

In the second part, we define our application service by specifying which Dockerfile compose should use during the build and which ports we forward from host to container.

Our postgres instance will be reachable by its “service name”, so from our application we can connect to the database server via testpostgres.

To build the services:

docker-compose build

And to run it locally:

docker-compose up -d

Configure CI

Create an account and a repository on Docker Hub; for this example, the user name will be username and the repository name testapp.

Now you should enable Pipelines on Bitbucket and create bitbucket-pipelines.yml in the root of your repo:

image: microsoft/aspnetcore-build:1.0-1.1

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - dotnet restore test.csproj
          - dotnet build test.csproj
          - dotnet publish -c Release -v n -o ./publish_output test.csproj
          - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
          - docker build -t username/testapp:dev .
          - docker push username/testapp:dev

options:
  docker: true

Here we have defined where to build our application (inside the microsoft/aspnetcore-build:1.0-1.1 image) and what to do during the build (the script section).

As you can see, with the last three steps we log in to Docker Hub, build our image, and push it to the remote repo.

Run on remote server

On our “production server” we can create a similar docker-compose file that uses the images from Docker Hub:

version: '2'

services:
  testpostgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
    volumes:
      - pgdata:/var/lib/postgresql/data

  testapp:
    image: username/testapp:dev
    restart: always
    ports:
      - 5000:80
    depends_on:
      - "testpostgres"

volumes:
  pgdata:

As you can see, it’s almost the same, but we have removed the build section and changed the image property, so we will use the previously pushed image from Docker Hub.



Pull the images and start the services:

docker-compose pull
docker-compose up -d

Go to http://localhost:5000/api/Values to check that everything works.


Does anyone read LinkedIn?

LinkedIn is the most popular business-oriented social network. A lot of us have an account there, but not many of us write anything there. Most of the time you see posts from recruiters about a new “awesome” position at “the best” company in the world, but almost no articles about the technical aspects of the work.

I asked myself: is it worth sharing my “IT”-related articles on LinkedIn? Is anyone gonna read them?

So, I made a post there ten days ago: “Hi, Guys! Can you somehow let me know if you see this post (like or comment)? I want to see how many people are reading this feed. Just a small experiment. Thanks!”

After 10 days, I have 10014 views, 131 likes, and 13 comments, and that’s with 1140 connections.

Here are the detailed statistics about views:

Developers are the most common group in my network, so it is clear that they see this post the most. Recruiters are in second place 🙂

Currently, I am living in the Netherlands, close to Amsterdam, so a lot of views came from people living in the same area. I am surprised that Ukraine is not on the list.

But on the next screen, you will see only Ukrainian companies. The biggest outsourcing companies in Ukraine, Luxoft and Ciklum, are missing….

And of course, the most views came from my 2nd-degree network:

I think LinkedIn is good enough for sharing work-related articles. You will get quite a lot of views, and most of them will be from people who share your work-related interests.

P.S. Join my network: