A short video showing client IP addresses moving around the cluster to quickly restore connectivity for your users running on Acropolis File Services.
Archives for February 2017
A quick video showing failover for Acropolis File Services. The deployment sets up most of the needed pieces, but you will still have to set a schedule and map the new container (vStore) used by AFS to the remote site.
Remember that you want the number of FSVMs making up the file server to be the same as or less than the number of nodes at the remote site.
With the new release of Docker Datacenter 2.1, it's clear that Docker is very serious about the enterprise and about providing tooling that is easy to use. Docker has made the leap to supporting enterprise applications with embedded security and ease of use. DDC 2.1 and CS Docker Engine 1.13 give operations and development teams the additional control needed to manage their own experience.
Docker Datacenter continues to build on containers as a service. The 1.12 release of DDC enabled agility and portability for continuous integration and started the journey of protecting the development supply chain throughout the whole lifecycle. The new release of DDC focuses on security, specifically secrets management.
The previous version of DDC already had a wealth of security features:
• LDAP/AD integration
• Role based access control for teams
• SSO and push/pull images with Docker Trusted Registry
• Image signing – prevent running a container unless the image is signed by a member of a designated team
• Out-of-the-box TLS with easy setup, including certificate rotation
With DDC 2.1, the march on security succeeds by giving both operations and developers a usable system without having to lean on the security team for support and help. The native integration with the management plane allows for end-to-end container lifecycle management. You also inherit a model that is infrastructure-independent: no matter what you are running on, it will work, and it can be as dynamic and ephemeral as the containers it manages. This is why I feel PaaS is dead. With so much choice and security you don't have to limit where you deploy, a design decision very similar to Nutanix enabling choice. Choice gives you access to more developers and the freedom to color outside the guardrails that a PaaS solution may impose.
Docker Datacenter Secrets Architecture
1) Everything in the store is encrypted, notably all of the data stored in the orchestration layer. Secrets are distributed with least privilege: they are delivered only to the nodes running containers that need them. Since the management layer is scalable, your key management scales along with it. And because the management layer is so easy to set up, you don't have developers embedding secrets in GitHub as a quick workaround.
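As a quick sketch of what this looks like from the CLI (the secret name and value here are illustrative), a secret created on a swarm manager is stored encrypted in the managers' raft log, and inspecting it returns only metadata:

```shell
# Create a secret from stdin on a swarm manager (illustrative name/value)
echo "db-pass-123" | docker secret create db_password -

# List secrets; the secret values themselves are never shown
docker secret ls

# Inspect returns metadata (ID, name, timestamps) but not the secret data
docker secret inspect db_password
```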
2) Containers and the filesystem make each secret available only to the designated app. Docker exposes secrets to the application as files on an in-memory filesystem. The same certificate rotation used for the management layer also happens with the certificates for the application. In the diagram above, the red service only talks to the red service, and the blue service is isolated by itself even though it's running on the same node as the red service/application.
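Assuming a service named `api` built from a hypothetical `myorg/api` image, granting it the secret makes the value appear as a plain file on an in-memory tmpfs mount inside the container:

```shell
# Grant the secret to a service at creation time (names are illustrative)
docker service create --name api --secret db_password myorg/api:latest

# Inside the container, the secret is a plain file on an in-memory tmpfs
# mounted under /run/secrets/<name>; the application reads it like any file:
#   cat /run/secrets/db_password
```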
3) If you decide that you want to integrate with a third-party application like Twitter, it can be easily done. Your Twitter credentials can be stored in the raft cluster formed by your manager nodes. When you create the Twitter app you give it access to the credentials, and you can even run a service update if you need to swap them out, without the need to touch every node in your environment.
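A rough sketch of that rotation, assuming a service named `twitter-app` and secrets named `twitter_token` / `twitter_token_v2` (all illustrative): create the new secret, then swap it in with `docker service update`, which rolls the change out to every task without touching individual nodes:

```shell
# Store the replacement credential as a new secret (names are illustrative)
echo "new-api-token" | docker secret create twitter_token_v2 -

# Swap secrets on the running service; the target filename stays the same,
# so the application keeps reading /run/secrets/twitter_token
docker service update \
  --secret-rm twitter_token \
  --secret-add source=twitter_token_v2,target=twitter_token \
  twitter-app
```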
With a simple interface for both developers and IT operations, both have a pain-free way to do their jobs and provide a secure environment. By not creating roadblocks or slowing down development, operations teams will get automatic buy-in.
| Configurable Item | Maximum Value |
| --- | --- |
| Number of connections per FSVM | 250 with 12 GB of memory; 500 with 16 GB; 1000 with 24 GB; 1500 with 32 GB; 2000 with 40 GB; 2500 with 60 GB; 4000 with 96 GB |
| Number of FSVMs | 16, or equal to the number of CVMs (whichever is lower) |
| Max RAM per FSVM | 96 GB (tested) |
| Max vCPUs per FSVM | 12 |
| Data size for home share | 200 TB per FSVM |
| Data size for general purpose share | 40 TB |
| Share name | 80 characters |
| File server name | 15 characters |
| Share description | 80 characters |
| Windows Previous Versions | 24 (1 per hour), adjustable with support |
| Throttle bandwidth limit | 2048 MBps |
| Data protection bandwidth limit | 2048 MBps |
| Max recovery time objective for Async DR | 60 minutes |