11/6/22

HashiCorp Vault OSS to Vault Enterprise Webinar

My name is Andrew Huddleston, a DevOps Solution Architect at iTrellis. I've been working with HashiCorp products for about six years now, and I'm certified in both Terraform and Vault, along with all three major clouds. Just so you guys know I am a real person, I am the guy in the picture here. I look a little different — different haircut, maybe a few more grays in my beard — but I wanted to let you know I am the same guy that shows in the picture. I'm going to turn my camera off during the webinar because of the overlay on the slides; I didn't want to distract from the points we're trying to get across. A few housekeeping items: chat is disabled, but there is a Q&A section at the top that you can ask questions in. A co-worker of mine might respond to you during the webinar, but we will cover all unanswered questions at the end of the webinar so it doesn't interrupt. Microphones are also muted, as you've probably seen, and cameras disabled, just to cut down on distractions. But like I said, at the end of the webinar I will enable microphones so we can go over any Q&A questions that were posted. So let's get started here.

First, let's get introductions out of the way. iTrellis is a consulting company that builds, configures, integrates, and operates applications across multiple clouds and on-premises, with tools and frameworks of your choice. HashiCorp is a software infrastructure company that offers a suite of multi-cloud infrastructure automation products that work both on-prem and in the cloud. And we have at least one person from HashiCorp here. Joshua Bradley, thank you for joining us. If you have anything to add on HashiCorp before I get going — and let's see, you should be able to unmute yourself, but if you can't, I can set that.

You should be able to now. There we go. You hear me alright? Yeah, okay. I'll keep my introduction brief. As Andrew said, I'm Joshua Bradley. I work at HashiCorp, and I've been here going on three years now, but I've worked with our tooling for — wow, I'm getting old — close to a decade.

For those who don't know the full stack, HashiCorp does make automation tooling. We're big believers that tools should encourage workflows. As our founders would say, we believe in workflows over technologies. And for those who don't know, we also make an enterprise product, so I guess that's probably why a lot of you are attending this webinar today. The idea is that a lot of our open-source tools get you 80% of the way there. A big thing when you want to go wide with the tools is how do we collaborate, and how do we get more sophisticated with the support systems around it. That's really what the enterprise products and the cloud products we've developed for HashiCorp's ecosystem of tools are all about. So I'm excited to see Andrew get the upgrade going, because that is sometimes the hardest part — going from one workflow to an upgraded workflow. Great! Thanks, Joshua! So with that, let's get going. Yes, that will work.

So, what is Vault? First, since this is a webinar on how to upgrade Vault OSS to Vault Enterprise, I am assuming you are somewhat knowledgeable. But for those who aren't, I'll give a quick overview of what Vault is. So where do applications store login credentials? They might store them in a file; they might even store them in their own lines of code. But it's insecure to store sensitive information this way. Furthermore, what if you have 10 applications that do this? Sensitive data might get scattered across many different places. These are some of the problems that HashiCorp Vault solves. In short, HashiCorp Vault is an identity-based secrets and encryption management system. A secret is anything that you want to tightly control access to, such as API keys, encryption keys, passwords, or certificates. And using Vault's UI, CLI, or API, access to secrets and other sensitive data can be securely stored, managed, tightly controlled, and audited.

So, that all sounds nice, but what if you don't know what a secret is? A secret might just mean a password, like this super secure password you see on the screen. It could be service account passwords, root user passwords — you can store any type of password in there. It could mean a token. For example, if you have a chatbot, it might use a personal access token, which is like a password that's pre-authorized to access certain things without having to do something like two-factor authentication. It could also be database connection strings, something you might put into application code, but now it could be a secure call to Vault instead of hard-coded plain text. And it could mean a certificate, which is too big to show on this slide. A web server could call Vault to get the certificate, so that it isn't stored on the web server for some hacker to gain access to. These are only some of the examples. Really, a secret can mean anything that you want to tightly control access to. It doesn't even have to be a password. It doesn't have to be something that needs to be encrypted. But if you want to store it in Vault, you can.

So, there are three different versions of Vault. There's Vault OSS, the open-source version of Vault. You can run it locally on your workstation to play around with it, and you can install it on servers to centralize your secrets. But if you use Vault OSS to centralize secrets in your production environment, you may encounter requirements for compliance, scale, or availability that lead you to evaluate Vault Enterprise to solve those technical challenges. There's HCP Vault — HCP stands for HashiCorp Cloud Platform. It's a hosted version of Vault, operated by HashiCorp, that lets organizations get up and running quickly. HCP Vault uses the same binary as self-hosted Vault, which means you get a consistent user experience. You can use the same Vault clients to communicate with HCP Vault as with self-hosted Vault — the CLI tools on your workstation are the same no matter which version of Vault you're running. And then there's Vault Enterprise, the reason we're here today. Vault Enterprise is the self-managed, on-premises, supported version of Vault. Some key features you get from Vault Enterprise include disaster recovery, in case of region outages or server failure; hardware security module (HSM) support; automated upgrades, so you don't have to worry about keeping up with all the new versions; and performance standbys, which set up your secondary servers as read-only, so read-heavy applications won't impact core Vault operations. You can also get logic-based security policies through HashiCorp Sentinel with Vault Enterprise. For example, you can create in-depth security logic with Sentinel so that certain secrets require multiple employees to sign off on their use. And if you need very fine-grained control of your secrets, you will need Vault Enterprise so you can control where your secrets replicate to.
An example of this might be if you have to follow GDPR compliance: you can prevent American secrets from being on your European servers, and vice versa. And today we're going to be talking about upgrading Vault OSS to Vault Enterprise. We won't be getting into HCP Vault; migrating to that could be another webinar all in itself. So why should you use Vault? Why should you choose it over any other secret storage solution? Azure, AWS, and GCP all have secret storage. The problem with storing secrets in places like that is sprawl. Most organizations today have credentials sprawled across the organization. Passwords, API keys, and credentials are stored in plain text, app source code, config files, and other locations. Because these credentials live everywhere, the sprawl can make it difficult and daunting to really know who has access and authorization to what, especially if your organization is multi- or hybrid cloud. You might have some secrets in AWS, some in Azure, and some on-premises. Vault takes all these credentials and centralizes them so they are defined in one location, which reduces unwanted exposure of your credentials. But Vault takes it a few steps further by making sure users, apps, and systems are authenticated and explicitly authorized to access resources, while also providing an audit trail that captures and preserves a history of clients' actions. Vault also encrypts all of its secrets. So even if someone were to gain access to a Vault server, they would not see the plaintext passwords without being authorized.

Some other key benefits of Vault itself, if you haven't used it: you get a single control plane for cloud security — one API to automate, control, and secure infrastructure and applications, unified support across heterogeneous environments, and integration with providers and technologies you're already using. You get automated certificate management. Every organization has gotten bit by a cert expiring and no one taking action to renew it before it expired. Because who looks at every single one of those automated emails you get overloaded with, reminding you of every vulnerability and maintenance notice your organization sends out, right? I know I don't. Every organization I've worked with has sent way too many of them. So, to improve the signal-to-noise ratio at your organization, Vault can remove the headache around TLS certificate creation, revocation, and renewal, and improve your security posture with automated certificate distribution and renewal. It reduces your risk by safeguarding and rotating certs across your entire fleet and ensuring root keys are secure. Vault also supports HSMs, which as I mentioned is purely a Vault Enterprise feature. If you work in a super secure environment like GovCloud, or have to deal with regulations like GDPR, you might need a solution that supports an HSM, which encrypts the entire solution at a hardware level. So if someone were to try to reboot Vault and hack the firmware, let's say, they would need someone on site to unlock Vault with an HSM key before they could see anything in Vault. And my favorite is dynamic secrets. Instead of having one admin account password, or hundreds of service account passwords you need to retrieve and remember what you named them, you can have Vault create just-in-time secrets for your applications.
Things like Kubernetes secret injection or database credentials — Vault can even rotate credentials for you on a schedule, so you can stop asking questions like, when's the last time this password was rotated?

Lastly about Vault: it has a broad ecosystem of integrations with various applications, platforms, and appliances for different use cases. There are lots of different authentication methods, so you can use whichever one your organization is already using to authenticate users to Vault. And for applications, it has a lot of different runtime integrations. One that I've used many times in my career is integrating Vault with Kubernetes secrets, where the backend secret is stored within Vault, but to Kubernetes pods it acts like you're just reading a normal, everyday Kubernetes secret. Now, with the boring stuff out of the way, let's get to why we're all here today: upgrading Vault OSS to Enterprise. And yes, that is a valid QR code that our marketing expert Liza made — shout out to her. I'll give you guys a second if anyone wants to scan it, because I know she worked hard on it.

Okay, so there are two primary methods to upgrade Vault OSS to Enterprise. First, there's an in-place upgrade. Some advantages of an in-place upgrade are that you ensure the consistency of data, because the data doesn't move during the upgrade; your existing configuration tools and applications won't need to be updated, because Vault stays on the same server; and it avoids any conflicts between multiple Vault servers. Some disadvantages are that if you need to roll back, it's going to be a manual process, and if the upgrade were to fail, the downtime would be longer than simply reverting would be with a new Vault Enterprise cluster. So, talking about the second method, the new Vault Enterprise cluster: some advantages of doing it this way are that the cutover is pretty quick, and you can revert easily if something goes wrong. Your original cluster remains unchanged in case you need to roll back — that's where the easy reverting comes from; you can just turn off the new one and turn the old one back on. But some major disadvantages are potential conflicts if both Vault servers were accidentally started. In the same vein, if both servers were somehow running, the data could become inconsistent between them — one person updates a secret on one server while someone else updates the same secret on the other, and now your database is messed up. You also have to modify your configuration tools and applications to point to the new cluster. It's not really a huge disadvantage; it's more of a pain that you have to update your Vault URL both locally on your clients and in any applications pointing to Vault.

So, you also need to know what your storage backend is. This slide is the one with the most text on it — I hate putting lots of text on slides — but it is important, because before upgrading you'll need to know if your storage backend is supported. If it isn't, you need to migrate to a supported external storage backend or to Integrated Storage. Looking at this comparison, it's easy to see why HashiCorp recommends Integrated Storage. For support: Integrated Storage is fully supported, versus only the Consul backend being supported as external storage. For operation: with Integrated Storage you don't need any additional software, versus having to set up high availability yourself with whatever external storage you choose. Say it's Azure or AWS — you have to set up your own HA on the backend, whereas Integrated Storage takes care of it for you. For networking: Integrated Storage is one less network hop, versus external storage, where it depends on where your storage is located. Is it in the same cloud, even? Is it in the same server rack? Is it on a hard drive on your desktop? How many network hops you add depends on where your storage is. For troubleshooting and monitoring: Integrated Storage is one system to monitor and troubleshoot, versus also having to monitor and troubleshoot your storage backend if something goes wrong. What if AWS, Azure, or GCP has an outage where your storage lives, or even just an outage of your storage account? Then your Vault server goes down. Another thing to think about: let's say your external storage is hypothetically Azure. If your company decides to move everything off Azure into AWS, you'll have to migrate your storage. But with Integrated Storage, that's not an issue, because the data and the server are in the same location, so you don't have a multi-stage migration — you just have to migrate the server.

Here are some different storage backends you might have if you're using external storage. There are lots of options: Azure, AWS, GCP, Alibaba Cloud, MySQL, Cassandra, etcd, Postgres, ZooKeeper, Consul, and many more. Keep in mind, though, if you're going to Vault Enterprise, the only supported external storage backend is Consul — Consul is also made by HashiCorp, which is why it's the only supported one. Now, Integrated Storage is the alternative to external storage, and it's also supported by HashiCorp. The Integrated Storage backend is used to persist Vault's data. Unlike other storage backends, Integrated Storage does not operate from a single source of data. Instead, all the nodes in a Vault cluster have a replicated copy of Vault's data, and data gets replicated across all the nodes via the Raft consensus algorithm. Now, I'm not going to get into the details of how the Raft consensus algorithm works, because it's a little complicated, but there's a resources slide at the end with links if you want to read more about it. Also note, if you do want to use managed Vault — HCP Vault, the one we're not talking about today — that uses Integrated Storage under the covers. So if you do migrate to that, it would also be good to read up on how the Raft consensus algorithm works, to know more about your environment. The main benefits of Integrated Storage: it's highly available, and again, it's officially supported by HashiCorp.
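For reference, an Integrated Storage setup is just a `storage "raft"` stanza in the Vault server config. This is a minimal sketch — the paths, node IDs, and addresses are illustrative placeholders, not values from the demo:

```hcl
# Minimal Integrated Storage (Raft) stanza for a Vault server config.
storage "raft" {
  path    = "/vault/data"      # where this node keeps its replicated copy of the data
  node_id = "vault-node-1"     # unique identifier for this node in the cluster

  # Optional: peers this node should try to join at startup.
  retry_join {
    leader_api_addr = "https://vault-node-2:8200"
  }
}

# Raft replication happens over the cluster port (8201 by default).
api_addr     = "https://vault-node-1:8200"
cluster_addr = "https://vault-node-1:8201"
```

Each node in the cluster gets the same stanza with its own `node_id`, and the nodes replicate data among themselves without any external storage service.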

Okay, now before I get into the actual upgrade: while this webinar does show that the upgrade is possible, it is not meant to be a substitute for professional services. You can do the upgrade at your own risk, but I recommend hiring someone who has done it before. Although the Vault binaries are the same across environments, every environment is unique because of different server versions or different packages installed on your servers — Company A might have an update cycle that's every month, and Company B has one that's every other month or every quarter. There are lots of variables in an upgrade that go into how well it will go for you. So it's good not to just rush into this and upgrade prod straight from watching this webinar.

So, step one: before you touch anything, take a backup. How you do the backup depends on where your server and backend are located. If Vault is running on a VM, you might take a snapshot of that VM, for instance. If your backend is DynamoDB, you can snapshot the table Vault is using. If your backend is one of the cloud storage options, you most likely have some type of backup already running, but you'll want to take a backup right before upgrading, just in case someone wrote a secret right before you run the upgrade — you want the most up-to-date data. Vault also does not make backwards compatibility guarantees for its data store, and this goes for upgrading to Enterprise or even just upgrading between versions. So simply replacing the newly installed Vault binary with the previous version will not cleanly downgrade for you if you need to downgrade. The reason is that upgrades may make changes to the underlying data structure that make the data incompatible with a downgrade.
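As a sketch of that first step, assuming Integrated Storage (Raft) — the commands are the real Vault CLI, but the filename is arbitrary:

```shell
# Step 1: take a backup before upgrading. With Integrated Storage,
# Vault can snapshot its own data (the server must be up and unsealed):
vault operator raft snapshot save backup.snap

# If Vault runs on a VM, snapshot the VM with your platform's tooling too.
# If the backend is external (DynamoDB, a cloud storage account, etc.),
# use that backend's own snapshot/backup mechanism instead.
```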

So, once you have a backup, you want to actually update your Vault binary. The first thing to do is stop Vault on all the nodes you have. If you're running in a high-availability environment with multiple nodes, start with the followers and stop the leader last, because the leader is the source of truth. After you stop Vault, you can update the Vault binary on all your nodes with the Enterprise binary. If you're migrating to an HSM solution, you'll want to migrate to new Vault Enterprise servers that already have HSM enabled. You can technically add HSM to a server after the fact, but it's very hard to do and I don't recommend it — if you're doing that, I would just go to a server that already has HSM enabled. These are the Vault binary versions and how you can tell the difference between them: OSS, at the top here, has nothing after the version number; Enterprise with HSM has +ent.hsm after the version; and Enterprise without HSM has just +ent at the end.
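One quick way to confirm which binary you're on, before and after the swap — the version numbers below are illustrative, not from the demo:

```shell
vault version
# Vault v1.12.0           <- OSS: nothing after the version
# Vault v1.12.0+ent       <- Enterprise
# Vault v1.12.0+ent.hsm   <- Enterprise with HSM support
```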

So now that we have Vault stopped in our environment, you want to migrate the storage — if, in this example scenario, you do have to migrate storage. In this example, let's say we still want to manage storage ourselves; we don't want to do Integrated Storage. You can see we're moving from DynamoDB to Consul — again, Consul is the only supported external storage backend, because it's a HashiCorp product. If you're on a supported backend already, this step is not required to upgrade to Vault Enterprise, and you can keep your existing backend. But if you wanted to migrate your backend, this is how you'd do it. You need to create a migrate.hcl file, which contains the source information and the destination configuration. In the example here, the source is DynamoDB, with all the keys and values in DynamoDB, and the destination is local Consul. What this command does is copy the data between the two storage backends for you, and it also initializes the destination. Also note: this command is meant to be run offline. I know I said stop Vault first — that's very important, because you don't want Vault running during this operation, or you might experience data loss between the two. So after you've migrated your storage, with Vault stopped and the binary updated, it's time to start Vault and get your validation going.
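A migrate.hcl along the lines of the one on the slide might look like this — the table name, region, and Consul address are placeholders for your environment:

```hcl
# migrate.hcl -- source and destination for an offline storage migration.
storage_source "dynamodb" {
  table  = "vault-data"
  region = "us-east-1"
}

storage_destination "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}
```

With Vault stopped on every node, `vault operator migrate -config=migrate.hcl` copies the data from the source to the destination and initializes the destination backend.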

So, after you've upgraded your Vault binary to the Enterprise version, and you have migrated storage if needed, you start Vault back up. You will need to apply the license from your HashiCorp representative before Vault Enterprise will start successfully — I'll show you how to do that in the demo in a minute. After Vault has started on all your servers, you can run the following CLI commands to verify that everything is working. You want to test every secrets engine after you've started Vault back up, because you never know, right? For the different secrets engines: with the KV engine, you can read a key/value secret to check it. With the transit engine, you can encrypt or decrypt ciphertext to make sure it's working. The PKI engine is for certificates; you can write a PKI role, then create or read a cert to make sure that's working. And then there's the database secrets engine; you can read database credentials to make sure that's working.
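A sketch of those validation commands — the mount paths, key names, and role names here are placeholders; adjust them to however your engines are actually mounted:

```shell
# KV: read a known secret
vault kv get secret/my-app

# Transit: encrypt something to prove the engine responds
vault write transit/encrypt/my-key plaintext="$(echo -n 'smoke test' | base64)"

# PKI: issue a short-lived cert against an existing role
vault write pki/issue/my-role common_name="test.example.com" ttl=1h

# Database: request dynamic credentials from an existing role
vault read database/creds/my-role
```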

So we'll get to the demo in a minute. These are the resources I was telling you about, and I'll keep them up for a minute. But let's get into the demo environment here. On the top here, I have the Vault server, and it's not running right now — I'll show you it running. And I'll update the time here so you guys know I'm doing this in real time, and it's not a pre-recorded video. I'm going to get the Vault server running in the top pane, and the bottom is going to act as my Vault client, which is just my workstation. The first thing we're going to do — I'm just running it locally in a Docker container, so this command is just going to start the container, and it's going to open port 8200, which is Vault's API port, and 8201, which is the Integrated Storage port, because I'm going to run Integrated Storage in my example. I have some volumes that I'm mapping here: one for log files, one for the configs, and the data directory for where the database is stored. And then I'm setting the entrypoint to sh so I can get into the container. So now that I'm in the container, let's look at the config files. I have the config files mounted in /vault/config. We have config.hcl, which is the config file I'm going to use for my Vault server. We have the license that HashiCorp has graciously given me — a temp license for this demo. This is the Enterprise Vault binary that I'm going to use for the upgrade; I've already unzipped it and put it here. And then we have vault.json, which is just some metadata. So let's look at config.hcl real quick. We can see that my storage is raft, which is Integrated Storage, and it's stored in /vault/data. This part is just setting up the listener. I'm disabling mlock just because I'm running locally in a container. Then we have the API and cluster addresses, and we're enabling the UI here. So let's take a backup — well, let's start Vault first, because I want to show you it running.
I'm going to run vault server -config. You can run this command in the background — you don't have to keep it running in the foreground — but I'm going to keep it in the foreground so we can see the logs.
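The container setup described above can be sketched roughly like this — the image name, host paths, and config filename are my placeholders, not necessarily what was typed on screen:

```shell
# 8200 is Vault's API port; 8201 is the Integrated Storage (Raft) port.
# Volumes map in the logs, the configs (config.hcl, license, binary),
# and the data directory where Raft persists the database.
docker run -it \
  -p 8200:8200 \
  -p 8201:8201 \
  -v "$PWD/logs":/vault/logs \
  -v "$PWD/config":/vault/config \
  -v "$PWD/data":/vault/data \
  --entrypoint sh \
  vault

# Then, inside the container, start the server in the foreground:
vault server -config=/vault/config/config.hcl
```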

So this gets Vault running locally. Let's pull up the UI real quick just so I can show you guys that it is running. You can see at the bottom here that it is not running Vault Enterprise. I want to show you that so I'm not pulling any smoke and mirrors where it was already Vault Enterprise and I made it look easier than it actually is. So, to use Vault, I need to unseal it. I had some configuration already set up in that config file, so we have some practice secrets we can read after the upgrade. I'm unsealing it with three keys so I can log in, because when Vault starts up, it starts sealed every time. And then I'm going to log in with a token. Again, you can set this to whatever login method you want — it can be username and password, LDAP, or you can log in with SSO pointing wherever you want. I just logged in with the root token, which is not recommended; it's just the root token given to you when you initialize Vault. So you can see that it's running. I have a few secrets engines running: I have a KV store, and I have PKI and transit running. So let's now get into the actual upgrade. Let's go back to the terminal. It is running — here are some logs showing that I unsealed it. I'm going to stop it here.

And so Vault is now not running. If I refresh this, it'll say it's dead. Going back here — remember, step one was taking a backup. So let's take a backup of our data just in case something goes wrong. We run vault operator raft snapshot save — raft, because I'm using Raft storage — and I'm just going to call it backup.save. And we're in the /vault/config directory.

Oh, I typoed snapshot. Live demos. Ah — so it has to be running. It does have to be running to take the backup. Let's start it up again, and I'm going to exec into this container. Do a docker ps to show that it is running — and actually, I don't need to exec into it; I can set up my client here. To set up your client, you need the Vault address, so I'm going to export these environment variables, and you need the token that you authenticate with. I'll set those two up, and now locally on my client I can run vault status. It shows that it's sealed but initialized, so I think I need to unseal it before I can take that snapshot. Let me try to run the snapshot, and we'll see if I need to unseal it first.

Yep, you have to unseal it. So let's go back and unseal this again real quick, just to make sure we have the backup before I run the upgrade.

So we get three keys again to unseal it — you can unseal it from the CLI as well, and I'll show that a little later once we upgrade. Now it's unsealed; I don't need to log into the UI, I can just do a vault status again and see that Sealed is false now. So it isn't sealed. Now let's take that backup. It was that quick — I just wanted to make sure it's here. Yeah, backup.save. Alright! So now we can stop Vault, which is step two, and we can start installing the Enterprise version. I stopped it, and we ran vault status again to see that it's not running — you can see it can't get the seal status because it's not running at all. So, what we want to do in the container: you saw that I have the Enterprise Vault binary in /vault/config, and /bin is where the installed Vault binary is — that's the open-source binary. The first thing I want to do is take a backup of that as well. You don't have to do that; I just like to back up more things, to be safe.
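The client setup and backup sequence from the last few steps, condensed into one place — the address and token are demo values:

```shell
# Point the local client at the server and authenticate.
export VAULT_ADDR="http://127.0.0.1:8200"
export VAULT_TOKEN="<your-root-or-admin-token>"

vault status                    # shows Sealed: true after every restart
vault operator unseal           # prompts for an unseal key; run once per key
vault operator raft snapshot save backup.save   # works only once unsealed
```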

I'm going to put the backup in that config folder and call it vault.orig. That's going to take a few seconds. After that's done — because Vault is stopped — all we have to do is copy the Enterprise binary from /vault/config to /bin/vault, overwriting the existing one.

That's going to take a few seconds. Okay, that's done running. So now we can start it up and see if it's running, right? Let's see what happens. There we go: vault server -config, pointing to the config file.

And what do you think is going to happen? Oh, it does not like to start, because you do need your license now. There's no autoloaded license, and it also gives you a URL where we can get a trial license. So, just for fun, let's go to that URL. Like I showed before, I already have a license, but if you come here, they're going to ask for your business email and information — they want that so they can add you to their email list and hit you up for meetings to buy Vault. So let's go back; I already have a license. Let's look at our license file. We go to /vault/config, and I have this license.hclic — hclic stands for HashiCorp license. Let's look at this guy: it's just a big string of random characters. That's all we're getting here. So, there are a few ways to set the license. You can set it in your config file, or you can export one of two different environment variables. The first one is just VAULT_LICENSE, so I'm going to export it like that, and it'll be set as an environment variable. If I take a look to make sure it's set correctly — looks like that's correct, VAULT_LICENSE. I'm also going to set it the other way, which is the license path: you can set the license itself or the license path, so you can set VAULT_LICENSE_PATH and just point to the file. I will note and stop here that when both the VAULT_LICENSE and VAULT_LICENSE_PATH environment variables exist, VAULT_LICENSE takes precedence. In addition, if the license path is defined in the server configuration file and VAULT_LICENSE or VAULT_LICENSE_PATH exists, the environment variables take precedence over your config file every time. That's just a good thing to know if you need to update your license: you might have it in the file, but the environment variable might also be set and wrong, so when your license is expiring and you need to renew it, make sure you update it in the correct places.
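The two license environment variables, as a quick sketch — the path comes from the demo setup, so treat it as illustrative:

```shell
# Option 1: the license string itself.
export VAULT_LICENSE="$(cat /vault/config/license.hclic)"

# Option 2: just the path to the license file.
export VAULT_LICENSE_PATH="/vault/config/license.hclic"

# Precedence: VAULT_LICENSE beats VAULT_LICENSE_PATH, and either
# environment variable beats a license_path set in the server config file.
```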
So we have that set; now let's try starting Vault again. We've got vault server running, and it's running successfully now. Let's check that Vault is running. Now I'm going to unseal it with the CLI, and then we'll confirm that it is Vault Enterprise. So it's up and running again. We see it's sealed, and we see the version here is +ent now. If you saw it up above, it was not +ent, so you can see the difference. But let's double-check. We're going to run vault operator unseal, which is how you unseal from the command line. I have all the keys right now for the demo's sake, but in a production environment, if you didn't know, this is going to be run by multiple different engineers — ideally, say, three different engineers hold different keys, and you need all of them to unseal it. It's showing one of three here. It's also good to note that it's going to tell you one of three, two of three, three of three whether the key is right or not. It's not going to tell you mid-way if you have an invalid key; it's only going to tell you at the end if a key was invalid. So now that's two of three, and let's do the last one here.

And hopefully none of these are invalid. On the third one, you see that it shows Sealed: false, and we saw some logs running up here. If one of the keys were wrong, it would still say Sealed: true and set the count back to zero of three — and it wouldn't tell you which key was wrong, either. So you need to make sure you have those keys and that they're valid. Now, here's the vault status. One API I forgot to check earlier for you guys: we can read the sys/license/status endpoint. If I had remembered to run this before the upgrade, it would have just said the endpoint doesn't exist — it only exists on Vault Enterprise. But now that we are on Vault Enterprise, you can see the license is autoloaded, it expires in January, and some more details about your license here. Now let's see what it looks like in the UI. I'll refresh this — remember, it's upgraded to Vault Enterprise, and this is how you can tell: down below, it now shows +ent. I'll log in again with my root key, but that's really all I had for the demo of how to upgrade. And I just want to note, because I'm running locally, it does look simple, but it's important to remember this is a very simple install of Vault. Like I said earlier, you might have different environments, different server types with different firmware versions or package versions, so it's really dependent on your environment, and this is just a very simple install running in Docker. So, I'll go back to the PowerPoint with the slides and the links. I did have them open as well, and they're very helpful resources. Here's the Raft consensus algorithm page, which has a nice graphic that shows how it runs and how it keeps everything in sync. If you want to read more about that — or if you want to read more about Consul, if you don't know about Consul, there's a link to Consul.
And then I also have the links to iTrellis and HashiCorp and learn.hashicorp.com, and I have my LinkedIn if you want to connect and talk more about it. But let's open the floor and see if there are any questions. I'm turning on the mic, so if anyone wants to unmute and ask questions, feel free; I'm going to take a look at the Q&A as well. I don't see any questions yet, so for those of you who want to ask, feel free to post in the Q&A and we'll unmute your mic. I also want to mention that after the webinar, we will be emailing you this recording as well as the resources Andrew is sharing on the screen right now. Correct, and like I said, if you want to talk more about upgrading or you just want to connect and talk more about some of this stuff, you can reach me on my LinkedIn, or you can reach out to HashiCorp. Either one works.

So that's all I had. If nobody has any questions, thank you for joining us, after your lunch or during your lunch, depending on what time zone you're in. I'll see you at the lunchroom, Andrew. Oh, nice demo, it's pretty good. Thanks Andrew! Thanks Jim!

Could you speak more about Raft Integrated Storage? Yeah, I can pull up that slide or the website, since we have some time. So Raft, if you've heard of something like Ceph, it's similar in that it keeps everything in sync. It has a leader node, and you want at least three nodes, because a majority of nodes, a quorum, has to agree. This animation shows it: one of them is the leader, and when it updates its database, say to two, it replicates that to the other nodes, tells all of them that the value is two, and then checks in every once in a while to make sure they're all still set to two. It works a bit like blockchain in that they all have to agree, to make sure the data is updated in all the places and that it's actually correct.
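If you're running Vault on Raft Integrated Storage, you can see the leader and followers directly; a quick sketch:

```shell
# Shows each node's address, its state (leader or follower), and voter status
vault operator raft list-peers
```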

Oh thanks!

Hi! Can you speak to any more steps you have to do after this initial upgrade, like setting up your DR and HA kind of clusters? Yeah, so with Vault you can set up leaders and followers. If you're talking about going from a single-node cluster to a multi-node cluster, you can migrate over to a single node first, and then when you install Vault on the others, you can run vault operator commands to join them to the cluster, and it'll start replicating across the multiple nodes.
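A sketch of that join flow, with a placeholder address:

```shell
# On each new node, after installing, configuring, and starting Vault,
# point it at an existing cluster member (address is a placeholder)
vault operator raft join https://vault-leader.example.com:8200

# Unseal the new node, then confirm it shows up as a follower
vault operator raft list-peers
```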

And I'll add, for the Enterprise that you've upgraded to: the base platform also comes with the ability to do an asynchronous replica to any other geo. You wouldn't want it in the same region, obviously, for business continuity purposes. For DR we do an active/passive replication: you set up your cluster in your other region, it's essentially a token-based join, and the primary will begin to ship logs to the secondary cluster so that you have some business continuity built into the system. There's even support for active-active for reads. Yep. And like you mentioned, across regions, and I'll add a little more: if you have an enterprise that spans different countries, you can set up rules so that data can't sync across servers in, say, the UK versus the US. For sure. You have to do a little planning, and that's where those Professional Services help, because it is still write-to-a-single-master region.
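A rough sketch of the token-based DR join Joshua described, using the standard Vault Enterprise replication endpoints (the id label and token are placeholders):

```shell
# On the primary cluster: enable DR primary replication
vault write -f sys/replication/dr/primary/enable

# Generate a wrapped activation token for the secondary
vault write sys/replication/dr/primary/secondary-token id=dr-secondary

# On the secondary cluster: activate replication with that token
vault write sys/replication/dr/secondary/enable token=<wrapped-token-from-primary>
```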

And what's iTrellis' role here? Do you offer third-party Professional Services? Yes, iTrellis offers third-party Professional Services, and we're a partner with HashiCorp. We are certified on HashiCorp products, and we help organizations with Terraform, Vault, Consul, all of the above. Yeah, we realized a couple of years back that we can't scale Professional Services internally very quickly, so it's wonderful to be able to take advantage of partners like iTrellis that help fill in for those kinds of activities.

Any other questions you might have out there? No question is too deep. We even have a cloud platform we could introduce. I have another question. When you're on open source, you don't have any namespaces, and when you're on Enterprise, you have namespaces. So how does the migration work with that? You migrate into the default namespace, which comes with Enterprise, and you then have the capability to move different secrets into different namespaces. In general, we structure namespaces for tenancy and/or environment. To that end, some people leverage namespaces for managing different stages of development, whereas others use them to let developers manage their own namespaces and control who can bring up what kinds of secret engines. Does that make sense?
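A small sketch of what that looks like once you're on Enterprise (namespace and path names are made up for illustration):

```shell
# Create per-team namespaces; everything migrated so far lives in root
vault namespace create team-dev
vault namespace create team-infra

# Enable a KV engine inside one namespace; secrets written there are
# invisible from the other namespace
vault secrets enable -namespace=team-dev -path=secret kv-v2
vault kv put -namespace=team-dev secret/app db_user=app
```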

Yeah, and for those that, as Josh mentioned, don't know about namespaces: they're really helpful if you're running a very large organization and you want the developers to have access to certain secrets, the infrastructure admins to have access to others, and the network admins to have access to a certain subset as well. That's another big benefit of Vault Enterprise that you don't get with open source. We find that when we go into orgs that have heavily embraced the open source, which we love, they tend to have lots of little clusters running everywhere, so it's a nice way to consolidate all those clusters.

And HashiCorp's website, the link to learn.hashicorp.com that I put in here, is honestly really awesome. It's not just because I'm a partner; I used it before we became partners to learn Terraform and learn Vault. It has lots of explanations, lots of demos to set up a lab if you want to play around with it, and free learning courses, so you don't have to go pay some third-party website. Their documentation is really awesome. Yeah, you're looking at the new version of our docs website; what do you think of it, Andrew? I think we just went live with it a couple of days back. It was weird at first; I noticed the change and I was used to the old way, but I like it now. I like the sidebar where you can easily search different things and tell which page you're on. I was on Enterprise, and there are some different tutorials here, like how to install a HashiCorp Enterprise license, and it's not just for Vault; there's Consul and Nomad as well for how to upgrade those licenses. And here's a lab on how to set up performance replication, like I mentioned, where you can have performance nodes for read-heavy applications instead of having everything be master-master. And here's a disaster recovery setup with step-by-step instructions.

Getting a little more off topic, how does a large organization manage contributing policies, adding endpoints, and adding authentication backends to Vault in, I don't know, a mature manner?

Yeah, so in terms of policies, we do have a public registry for Terraform where we've started introducing policy-as-code examples. It hasn't yet extended to Vault, but long term I think that's where we're trying to gather community assistance. In terms of contributing a plugin, within this developer platform that Andrew's showing you, there are some learning guides on how to develop those plugins. And then each of the individual product lines, like Nomad, Consul, and Terraform, has its own community guidelines for contributing plugins and getting them fully on track. I will say Terraform has been better at onboarding than Vault, just because a security product tends to move a little slower; there's always concern about ensuring everything is super secure, even on the plugin side.

Yeah, I mean, that's not exactly what I was asking. How does a large company with a large Vault server and lots of users manage their internal users, manage their internal policies for accessing Vault, manage their addition of, you know, secrets mount points and auth mount points, and do this in a repeatable and visible way? Sure. I see a lot of the guides just say run this Vault command or that one, and it seems like that will work if you're using it alone, but in a large organization that seems difficult.

Andrew, you want first crack? Yeah, I can take first crack at that. As far as users, it's definitely recommended to use SSO through whatever authentication source you have, like Active Directory, or use something like AWS auth, because then you don't have two different sets of user credentials to manage and remember to rotate. That's how most large organizations I've seen handle user auth. As far as policies between teams in a large organization, that's where namespaces really come in to help, because you can apply different policies to different namespaces, and you can get really fine-grained access control within those namespaces on who has access to what secrets.
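For example, a namespace-scoped policy might look like this (namespace, policy name, and paths are hypothetical):

```shell
# Developers in team-dev may read app secrets but not write them
vault policy write -namespace=team-dev dev-read - <<'EOF'
path "secret/data/app/*" {
  capabilities = ["read", "list"]
}
EOF
```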

Yeah, and I'd say the Enterprise products introduce two other mechanisms. A lot of our workflows are about getting teams to be self-service oriented. So the concept of roles can be applied in templated format with our general ACL policies: you can come up with a set of policies that will be inherited based on the role that the user's authentication method joins them to. And then when they join a particular namespace, you can grant, through our control groups, basically the administrative privileges to run that namespace, authorize by policy what kinds of secret mounts they're allowed to mount, and in addition assign namespace policies for the users that might be let into it. So it's usually a combination of ACL policies based on roles rather than on individual authenticated members, and through authentication you get placed into those roles based on the assertions or whatever other mechanisms exist within the SSO. You complement that with things like our control groups to manage workflows within Vault. In addition, there are some extended Sentinel policies around what we call endpoint policy restrictions, and I think a resource policy restriction based on the token. So there are a lot of additional tools that Enterprise can offer to help you scale it en masse. Is there a guide for those kinds of role-based policies? There are some basic beginner guides within the developer, sorry, within the Learn tutorials; maybe afterwards we can get you some links to those. Otherwise, part of purchasing Enterprise is we do have an onboarding workflow where we bring you in and you work with what we call our customer success team.
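The templated ACL policies mentioned here use identity parameters in the path; a minimal sketch (the policy name is made up):

```shell
# One policy serves every user: each authenticated entity is confined
# to its own KV subtree via the identity template parameter
vault policy write user-kv - <<'EOF'
path "secret/data/{{identity.entity.name}}/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOF
```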
And they show you all the basics and then have open office hours, as well as working with partners like iTrellis to determine what you might specifically need for an engagement, typically if there are things you want for bootstrapping. But yeah, our online documentation doesn't cover it outside of generalities. Okay, thanks. And I would add that Terraform really makes it easier to set some of this stuff up, because it can be daunting to do all of this manually. The HashiCorp tools really work well together, and you can have Terraform deploy all of your Vault policies for you.
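A sketch of that Terraform approach, using the Vault provider's vault_policy resource (file and policy names are made up):

```shell
# Keep Vault policies in version control and apply them with Terraform
cat > policies.tf <<'EOF'
provider "vault" {} # reads VAULT_ADDR / VAULT_TOKEN from the environment

resource "vault_policy" "dev_read" {
  name   = "dev-read"
  policy = <<-HCL
    path "secret/data/app/*" {
      capabilities = ["read", "list"]
    }
  HCL
}
EOF

terraform init && terraform apply
```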

So you don't need Terraform Enterprise to do that? No, you do not, but you could use it. I do recommend the cloud version if you do, though. Yeah, one less thing to maintain.

But good questions, good questions! Anything else, Jeff, or anyone else on the call?

Everybody's ready for that lunch. There's nobody in Q&A. Okay.

I think that's a wrap, Andrew. Awesome, well, thank you all for joining us today for this webinar. And like Liza mentioned, we'll be emailing you a link to the video recording that you can watch or share with your co-workers if you're interested in Vault or iTrellis. And yeah, thank you all for joining.

iTrellis, a technology solutions consulting firm specializing in custom software development and design, Azure DevOps, and data analytics. Dedicated to understanding clients' business strategy and aligning appropriately skilled consultants. Learn more at iTrellis.com
