To continue our effort of being modular and well factored, we don't provide the entire .NET Core platform as a single NuGet package. Instead, it is delivered as a set of fine-grained NuGet packages.

.NET Core is a general purpose, modular, cross-platform and open source implementation of the .NET Standard.

Here are the main characteristics of .NET Core:

  • Cross-platform: .NET Core provides key functionality to implement the app features you need and lets you reuse this code regardless of your platform target. It currently supports three main operating systems (OS): Windows, Linux, and macOS. You can write apps and libraries that run unmodified across supported operating systems. To see the list of supported operating systems, visit the .NET Core roadmap.

  • Open source: .NET Core is one of the many projects under the stewardship of the .NET Foundation and is available on GitHub. Having .NET Core as an open source project promotes a more transparent development process and an active, engaged community.

  • Flexible deployment: There are two main ways to deploy your app: framework-dependent deployment or self-contained deployment. With framework-dependent deployment, only your app and third-party dependencies are installed, and your app depends on a system-wide version of .NET Core being present. With self-contained deployment, the .NET Core version used to build your application is also deployed along with your app and third-party dependencies and can run side-by-side with other versions. For more information, see .NET Core Application Deployment; a sketch of both publish commands follows this list.

  • Modular: .NET Core is modular because it's released through NuGet in smaller assembly packages. Rather than one large assembly that contains most of the core functionality, .NET Core is made available as smaller feature-centric packages. This enables a more agile development model for us and allows you to optimize your app to include just the NuGet packages you need. The benefits of a smaller app surface area include tighter security, reduced servicing, improved performance, and decreased costs in a pay-for-what-you-use model.
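
As a rough illustration of the two deployment modes described above, the following commands show how an app might be published framework-dependent versus self-contained. The runtime identifier linux-x64 and the Release configuration are example values, not requirements:

    # Framework-dependent deployment: the target machine must already have .NET Core installed
    dotnet publish -c Release

    # Self-contained deployment: the .NET Core runtime for the chosen platform is bundled with the app
    dotnet publish -c Release -r linux-x64 --self-contained true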

.NET Core is composed of the following parts:

  • .NET runtime, which provides a type system, assembly loading, a garbage collector, native interop and other basic services.
  • A set of framework libraries, which provide primitive data types, app composition types and fundamental utilities.
  • A set of SDK tools and language compilers that enable the base developer experience, available in the .NET Core SDK.
  • The 'dotnet' app host, which is used to launch .NET Core apps. It selects and hosts the runtime, provides an assembly loading policy, and launches the app. The same host is also used to launch SDK tools in much the same way.

Relationship to .NET Standard

The .NET Standard is an API spec that describes the consistent set of .NET APIs that developers can expect in each .NET implementation. .NET implementations need to implement this spec in order to be considered .NET Standard-compliant and to support libraries that target .NET Standard.

.NET Core implements .NET Standard, and therefore supports .NET Standard libraries.

The primary focus of Mono in recent years has been mobile platforms, while .NET Core is focused on cloud workloads.

The .NET Core team is currently focused on Web, Cloud, Microservices, Containers, and Console applications. 

Docker: Packaging your apps to deploy and run anywhere

The most obvious horizontal scenario for using Docker with .NET applications is production deployment and hosting. But production is just one scenario; the others are equally useful. These scenarios are not specific to .NET and should apply to most developer platforms.

  • Low friction install — You can try out .NET without a local install. Just download a Docker image with .NET in it.

  • Develop in a container — You can develop in a consistent environment, making development and production environments similar (avoiding issues like global state on developer machines). Visual Studio Tools for Docker even enable you to start a container directly from Visual Studio.

  • Test in a container — You can test in a container, reducing failures due to incorrectly configured environments or other changes left behind from the last test.

  • Build in a container — You can build code in a container, avoiding the need to correctly configure shared build machines for multiple environments but instead move to a “BYOC” (bring your own container) approach.

  • Deployment in all environments — You can deploy an image through all of your environments. This approach reduces failures due to configuration differences, typically changing the image behavior via external configuration (for example, injected environment variables).

When you design a container image, you will see an ENTRYPOINT definition in the Dockerfile. This defines the process whose lifetime controls the lifetime of the container. When the process completes, the container lifecycle ends. Containers might represent long-running processes like web servers, but can also represent short-lived processes like batch jobs, which formerly might have been implemented as Azure WebJobs.
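
As an illustration, a minimal Dockerfile for a .NET Core app might look like the sketch below. The base image tag and the assembly name MyApp.dll are assumptions for this example, not values taken from this article:

    # Minimal sketch of a Dockerfile for a published .NET Core app.
    # The image tag and MyApp.dll are placeholders.
    FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
    WORKDIR /app
    COPY ./publish .

    # ENTRYPOINT defines the process whose lifetime controls the lifetime of the container.
    ENTRYPOINT ["dotnet", "MyApp.dll"]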

If the process fails, the container ends, and the orchestrator takes over. If the orchestrator is configured to keep five instances running and one fails, the orchestrator will create another container instance to replace the failed process. In a batch job, the process is started with parameters; when the process completes, the work is complete. This guidance drills down on orchestrators later.

You might find a scenario where you want multiple processes running in a single container. For that scenario, since there can be only one entry point per container, you could run a script within the container that launches as many programs as needed. For example, you can use Supervisor or a similar tool to take care of launching multiple processes inside a single container. However, even though you can find architectures that hold multiple processes per container, that approach is not very common.

Microservices offer great benefits but also raise huge new challenges. Microservice architecture patterns are fundamental pillars when creating a microservice-based application.

A microservices architecture is an approach to building a server application as a set of small services. Each service runs in its own process and communicates with other processes using protocols such as HTTP/HTTPS, WebSockets, or AMQP.

Each microservice implements a specific end-to-end domain or business capability within a certain context boundary, and each must be developed autonomously and be deployable independently.

In addition, each microservice should own its related domain data model and domain logic (sovereignty and decentralized data management), and each can be based on different data storage technologies (SQL, NoSQL) and different programming languages.

What size should a microservice be? When developing a microservice, size should not be the important point. Instead, the important point should be to create loosely coupled services so you have autonomy of development, deployment, and scale, for each service.

When identifying and designing microservices, you should try to make them as small as possible, as long as you do not have too many direct dependencies on other microservices.

More important than the size of the microservice is the internal cohesion it must have and its independence from other services.

Why a microservices architecture? In short, it provides long-term agility. Microservices enable better maintainability in complex, large, and highly-scalable systems by letting you create applications based on many independently deployable services that each have granular and autonomous lifecycles.

As an additional benefit, microservices can scale out independently. Instead of having a single monolithic application that you must scale out as a unit, you can instead scale out specific microservices. That way, you can scale just the functional area that needs more processing power or network bandwidth to support demand, rather than scaling out other areas of the application that do not need to be scaled. That means cost savings because you need less hardware.

The microservices approach allows agile changes and rapid iteration of each microservice, because you can change specific, small areas of complex, large, and scalable applications.

Architecting fine-grained microservices-based applications enables continuous integration and continuous delivery practices. It also accelerates delivery of new functions into the application. Fine-grained composition of applications also allows you to run and test microservices in isolation, and to evolve them autonomously while maintaining clear contracts between them. As long as you do not change the interfaces or contracts, you can change the internal implementation of any microservice or add new functionality without breaking other microservices.

The following are important aspects to enable success in going into production with a microservices-based system:

  • Monitoring and health checks of the services and infrastructure.

  • Scalable infrastructure for the services (that is, cloud and orchestrators).

  • Security design and implementation at multiple levels: authentication, authorization, secrets management, secure communication, etc.

  • Rapid application delivery, usually with different teams focusing on different microservices.

  • DevOps and CI/CD practices and infrastructure.

In a monolithic application running on a single process, components invoke one another using language-level method or function calls. These can be strongly coupled if you are creating objects with code (for example, new ClassName()), or can be invoked in a decoupled way if you are using Dependency Injection by referencing abstractions rather than concrete object instances. Either way, the objects are running within the same process.
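
As a minimal C# sketch of those two in-process styles (all type names here are invented for illustration):

    // Hypothetical types used only to illustrate the two styles.
    public interface IOrderRepository { void Save(string orderId); }
    public class OrderRepository : IOrderRepository { public void Save(string orderId) { /* persist the order */ } }

    // Strongly coupled: the component creates a concrete class itself with new.
    public class CoupledCheckoutService
    {
        private readonly OrderRepository _orders = new OrderRepository();
        public void PlaceOrder(string orderId) => _orders.Save(orderId);
    }

    // Decoupled via Dependency Injection: the component depends only on the abstraction,
    // yet both objects still run within the same process.
    public class DecoupledCheckoutService
    {
        private readonly IOrderRepository _orders;
        public DecoupledCheckoutService(IOrderRepository orders) => _orders = orders;
        public void PlaceOrder(string orderId) => _orders.Save(orderId);
    }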

The biggest challenge when changing from a monolithic application to a microservices-based application lies in changing the communication mechanism.

A direct conversion from in-process method calls into RPC calls to services causes chatty and inefficient communication that does not perform well in distributed environments.

The challenges of designing distributed systems properly are well enough known that there is even a canon known as The fallacies of distributed computing, which lists assumptions that developers often make when moving from monolithic to distributed designs.

There is not one solution, but several. One solution involves isolating the business microservices as much as possible. You then use asynchronous communication between the internal microservices and replace fine-grained communication that is typical in intra-process communication between objects with coarser-grained communication. You can do this by grouping calls, and by returning data that aggregates the results of multiple internal calls, to the client.
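
One possible sketch of that coarser-grained style in C# (all types here are hypothetical): rather than the client making several fine-grained requests, a single operation aggregates the results it needs.

    // Hypothetical types used only to illustrate coarse-grained aggregation.
    public interface IOrderStore { (decimal Total, string Status) Find(string orderId); }
    public interface IShippingStatusCache { string StatusFor(string orderId); }

    public class OrderSummaryDto
    {
        public string OrderId { get; set; }
        public decimal Total { get; set; }
        public string ShippingStatus { get; set; }
    }

    // One coarse-grained call returns everything the client needs,
    // instead of the client issuing several fine-grained requests.
    public class OrderSummaryService
    {
        private readonly IOrderStore _orders;
        private readonly IShippingStatusCache _shipping;

        public OrderSummaryService(IOrderStore orders, IShippingStatusCache shipping)
        {
            _orders = orders;
            _shipping = shipping;
        }

        public OrderSummaryDto GetSummary(string orderId)
        {
            var order = _orders.Find(orderId);
            return new OrderSummaryDto
            {
                OrderId = orderId,
                Total = order.Total,
                ShippingStatus = _shipping.StatusFor(orderId)
            };
        }
    }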

A microservices-based application is a distributed system running on multiple processes or services, usually even across multiple servers or hosts. Each service instance is typically a process. Therefore, services must interact using an inter-process communication protocol such as HTTP, AMQP, or a binary protocol like TCP, depending on the nature of each service.

The microservice community promotes the philosophy of "smart endpoints and dumb pipes." This slogan encourages a design that is as decoupled as possible between microservices, and as cohesive as possible within a single microservice.

As explained earlier, each microservice owns its own data and its own domain logic. But the microservices composing an end-to-end application are usually simply choreographed, using REST communications rather than complex protocols such as WS-*, and flexible event-driven communications instead of centralized business-process orchestrators.

The two commonly used protocols are HTTP request/response with resource APIs (above all for querying) and lightweight asynchronous messaging for communicating updates across multiple microservices.

Communication types

Clients and services can communicate through many different types of communication, each targeting a different scenario and goals. Initially, those types of communication can be classified along two axes.

The first axis defines whether the protocol is synchronous or asynchronous:

  • Synchronous protocol. HTTP is a synchronous protocol. The client sends a request and waits for a response from the service. That is independent of client code execution, which could be synchronous (the thread is blocked) or asynchronous (the thread is not blocked, and the response will eventually reach a callback). The important point here is that the protocol (HTTP/HTTPS) is synchronous: the client code can only continue its task when it receives the HTTP server response.

  • Asynchronous protocol. Other protocols like AMQP (a protocol supported by many operating systems and cloud environments) use asynchronous messages. The client code or message sender usually does not wait for a response. It just sends the message, as when sending a message to a RabbitMQ queue or any other message broker.

The second axis defines whether the communication has a single receiver or multiple receivers:

  • Single receiver. Each request must be processed by exactly one receiver or service. An example of this communication is the Command pattern.

  • Multiple receivers. Each request can be processed by zero to multiple receivers. This type of communication must be asynchronous. An example is the publish/subscribe mechanism used in patterns like Event-driven architecture. This is based on an event-bus interface or message broker when propagating data updates between multiple microservices through events; it is usually implemented through a service bus or similar artifact like Azure Service Bus by using topics and subscriptions.

A microservice-based application will often use a combination of these communication styles. The most common type is single-receiver communication with a synchronous protocol like HTTP/HTTPS when invoking a regular Web API HTTP service. Microservices also typically use messaging protocols for asynchronous communication between microservices.
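
As a hedged sketch of the asynchronous, fire-and-forget style, the C# program below publishes a message to a queue using the RabbitMQ.Client library; the broker address, queue name, and payload are assumptions made for illustration:

    using System.Text;
    using RabbitMQ.Client;

    // Minimal sketch: publish a message and continue without waiting for any response.
    // Assumes a RabbitMQ broker on localhost and the RabbitMQ.Client NuGet package.
    class PriceChangedPublisher
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            channel.QueueDeclare(queue: "price-changed", durable: true, exclusive: false, autoDelete: false);

            var body = Encoding.UTF8.GetBytes("{\"productId\": 42, \"newPrice\": 19.99}");

            // Fire-and-forget: the publisher does not block waiting on the consuming microservice.
            channel.BasicPublish(exchange: "", routingKey: "price-changed", basicProperties: null, body: body);
        }
    }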

Asynchronous microservice integration enforces each microservice's autonomy

As mentioned, the important point when building a microservices-based application is the way you integrate your microservices. Ideally, you should try to minimize the communication between the internal microservices. The less communication between microservices, the better. But of course, in many cases you will have to somehow integrate the microservices. When you need to do that, the critical rule is that the communication between the microservices should be asynchronous. That does not mean that you have to use a specific protocol (for example, asynchronous messaging versus synchronous HTTP). It just means that the communication between microservices should be done only by propagating data asynchronously; try not to depend on other internal microservices as part of the initial service's HTTP request/response operation.

If possible, never depend on synchronous communication (request/response) between multiple microservices, not even for queries. The goal of each microservice is to be autonomous and available to the client consumer, even if the other services that are part of the end-to-end application are down or unhealthy. If you think you need to make a call from one microservice to other microservices (like performing an HTTP request for a data query) in order to be able to provide a response to a client application, you have an architecture that will not be resilient when some microservices fail.

Moreover, having HTTP dependencies between microservices, as when creating long request/response cycles with HTTP request chains (shown in the first part of Figure 4-15), not only makes your microservices not autonomous, but also hurts their performance as soon as one of the services in that chain is not performing well.

The more you add synchronous dependencies between microservices, such as query requests, the worse the overall response time gets for the client apps.

As noted earlier in the section Identifying domain-model boundaries for each microservice, duplicating some data across several microservices is not an incorrect design—on the contrary, when doing that you can translate the data into the specific language or terms of that additional domain or Bounded Context.

You might use any protocol to communicate and propagate data asynchronously across microservices in order to have eventual consistency. As mentioned, you could use integration events using an event bus or message broker or you could even use HTTP by polling the other services instead. It does not matter. The important rule is to not create synchronous dependencies between your microservices.
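
A minimal C# sketch of that idea follows, with a hypothetical IEventBus abstraction so that the concrete transport (RabbitMQ, Azure Service Bus, or even HTTP polling) remains interchangeable; all type names are assumptions:

    // Hypothetical integration event; other microservices reach eventual consistency by handling it.
    public class ProductPriceChangedIntegrationEvent
    {
        public int ProductId { get; }
        public decimal NewPrice { get; }

        public ProductPriceChangedIntegrationEvent(int productId, decimal newPrice)
        {
            ProductId = productId;
            NewPrice = newPrice;
        }
    }

    // Hypothetical abstraction: the concrete transport is an implementation detail.
    public interface IEventBus
    {
        void Publish(object integrationEvent);
    }

    public class CatalogService
    {
        private readonly IEventBus _eventBus;

        public CatalogService(IEventBus eventBus) => _eventBus = eventBus;

        public void UpdatePrice(int productId, decimal newPrice)
        {
            // 1. Update the data this microservice owns (omitted here).
            // 2. Propagate the change asynchronously instead of querying other microservices synchronously.
            _eventBus.Publish(new ProductPriceChangedIntegrationEvent(productId, newPrice));
        }
    }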

Communication styles

Request/response communication with HTTP and REST

When a client uses request/response communication, it sends a request to a service, then the service processes the request and sends back a response. Request/response communication is especially well suited for querying data for a real-time UI (a live user interface) from client apps. Therefore, in a microservice architecture you will probably use this communication mechanism for most queries, as shown in Figure 4-16.

When a client uses request/response communication, it assumes that the response will arrive in a short time, typically less than a second, or a few seconds at most. For delayed responses, you need to implement asynchronous communication based on messaging patterns and messaging technologies, which is a different approach that we explain in the next section.

A popular architectural style for request/response communication is REST. This approach is based on, and tightly coupled to, the HTTP protocol, embracing HTTP verbs like GET, POST, and PUT. REST is the most commonly used architectural communication approach when creating services. You can implement REST services when you develop ASP.NET Core Web API services.
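
For example, a minimal ASP.NET Core Web API controller for a REST-style query might look like the sketch below; the route, DTO, and repository types are illustrative assumptions:

    using Microsoft.AspNetCore.Mvc;

    // Minimal sketch of a REST query endpoint in ASP.NET Core Web API.
    [ApiController]
    [Route("api/[controller]")]
    public class CatalogController : ControllerBase
    {
        private readonly ICatalogRepository _catalog;

        public CatalogController(ICatalogRepository catalog) => _catalog = catalog;

        // GET api/catalog/42
        [HttpGet("{id}")]
        public ActionResult<CatalogItemDto> GetById(int id)
        {
            var item = _catalog.Find(id);
            if (item == null)
            {
                return NotFound();
            }
            return Ok(item);
        }
    }

    // Hypothetical supporting types for the sketch.
    public interface ICatalogRepository { CatalogItemDto Find(int id); }
    public class CatalogItemDto { public int Id { get; set; } public string Name { get; set; } }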

Push and real-time communication based on HTTP

Another possibility (usually for different purposes than REST) is real-time, one-to-many communication with higher-level frameworks such as ASP.NET SignalR and protocols such as WebSockets.

Real-time HTTP communication means that you can have server code pushing content to connected clients as the data becomes available, rather than having the server wait for a client to request new data.

Since communication is in real time, client apps show the changes almost instantly. This is usually handled by a protocol such as WebSockets, using many WebSockets connections (one per client). A typical example is when a service communicates a change in the score of a sports game to many client web apps simultaneously.
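
A minimal ASP.NET Core SignalR sketch of that scoreboard scenario is shown below; the hub name, the "ScoreUpdated" method name, and the payload shape are assumptions:

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.SignalR;

    // Minimal sketch: push a score change to every connected client as soon as it happens.
    public class ScoreHub : Hub
    {
    }

    public class ScoreNotifier
    {
        private readonly IHubContext<ScoreHub> _hub;

        public ScoreNotifier(IHubContext<ScoreHub> hub) => _hub = hub;

        public Task PublishScoreAsync(string game, int home, int away)
        {
            // The server pushes the update; clients do not poll for new data.
            return _hub.Clients.All.SendAsync("ScoreUpdated", game, home, away);
        }
    }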
