Making the OneLogin Terraform Provider

Dominick Caponi
8 min read · Jun 15, 2020

Ops-ifying and Automating Access Management For Your Business

Making the World a Better Place…

Through automated access management configuration as code. Wait, what? Imagine this: your business has 30 applications it subscribes to. Your employees want a simple interface where they can select and sign into any of your company’s apps, preferably without memorizing 30 different passwords… or worse, using the same password123 password for all of them. So you sign up and manage apps through OneLogin’s portal.

Now you have 30 apps to manage: some are OIDC based, others are SAML, and there are a few basic username/password apps thrown in. Each of these apps can be configured through OneLogin’s UI to change things like how they are presented or, more importantly, SAML metadata and user attribute mappings, so each app knows which user attributes to use for authentication. Doing this one at a time in a UI is cumbersome and leaves no easily accessible change record.

Enter Terraform

Typically in DevOps, infrastructure like AWS EC2 instances, VPNs, or Security Groups is managed in an automated fashion to prevent human error and ensure a consistent deployment process. Terraform lets a user specify which resources to spin up and what the attributes of those resources are. Behind the curtain, Terraform uses a plugin to know which API endpoints to call to create those AWS resources. We can use OneLogin’s APIs to make a Terraform plugin that lets you manage those 30 apps, and eventually users and user mappings, the same way we manage infrastructure.

We interact with Terraform using a .tf HCL file. This is how we define the end state, or in our example, which apps we want to provide to our employees and what they should look like. Terraform also keeps track of the previous state, so if mistakes are made it’s possible to roll back. This file can be checked into your favorite version control system like git or svn, which effectively gives you a change log of how your apps evolve over time and makes auditing and reporting a breeze.

Example main.tf file
App resulting from the main.tf above

We Got Our Work Cut Out For Us

We don’t exactly have the simplest structures to work with when it comes to apps. In OneLogin, a single app contains a lot of information and some of that information is represented by sub-resources. A typical SAML app might look something like this.

{
  "id": <id>,
  "description": "an AWS app",
  "allow_assumed_signin": false,
  "provisioning": { "enabled": false },
  "configuration": {
    "certificate_id": 344091,
    "signature_algorithm": "SHA-256"
  },
  "name": "AWS Account",
  "policy_id": null,
  "visible": true,
  "parameters": {
    "https://aws.amazon.com/SAML/Attributes/Role": {
      "user_attribute_macros": "",
      "id": <id>,
      "skip_if_blank": false,
      "user_attribute_mappings": "none",
      "label": "Role",
      "default_values": "",
      "provisioned_entitlements": false,
      "values": "",
      "attributes_transformations": "amazon_roles"
    },
    "saml_username": {
      "user_attribute_macros": "",
      "id": <id>,
      "skip_if_blank": false,
      "user_attribute_mappings": "email",
      "label": "Amazon Username",
      "default_values": "",
      "provisioned_entitlements": false,
      "values": "",
      "attributes_transformations": ""
    }
  },
  "sso": {
    "certificate": {
      "id": <id>,
      "value": "<certificate value>",
      "name": "Standard Strength Certificate (2048-bit)"
    },
    "acs_url": "<url>",
    "sls_url": "<url>",
    "metadata_url": "<url>",
    "issuer": "<url>"
  },
  "auth_method": 2,
  "role_ids": [1],
  "auth_method_description": "SAML2.0",
  "tab_id": null
}

Some sub-nodes come back as objects, others as arrays, and for fields such as rules, the ordering of items in an array matters. We needed to represent this in a consistent way, so that when we add more resources it is immediately apparent what needs to happen; that consistency also lends itself to more accurate estimates and increased developer velocity.

Our path was 4-fold:

  1. Create the provider and tell it to use the OneLogin Go Client from the SDK
  2. Add resource files for each app type and implement the CRUD functions using the SDK client
  3. Come up with a way to unify inflating and flattening deeply nested shapes independent of each other
  4. Testing to meet HashiCorp requirements and to ensure the Inflate and Flatten methods work consistently

Terraform Provider Skeleton

Once we had our dependencies in go.mod, we had to write the Terraform boilerplate. If you’ve written a Terraform provider before, this will all be very familiar.

The main.go file is the entrypoint to our Go program and starts the Terraform plugin. The plugin expects a Provider, which we defined in onelogin/provider.go. The provider defines our API client and how we connect to it. It is also where you register the resources this provider will manage; in our case, the OneLogin provider manages basic apps, SAML apps, and OpenID Connect (OIDC) apps.
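A rough sketch of that skeleton is below. The module import path, credential field names, resource names, and the SDK’s APIClientConfig fields are assumptions for illustration, and the terraform-plugin-sdk v1 import paths are assumed.

// main.go — hands our provider to the Terraform plugin machinery.
package main

import (
    "github.com/hashicorp/terraform-plugin-sdk/plugin"

    // assumed module path for this provider
    "github.com/onelogin/terraform-provider-onelogin/onelogin"
)

func main() {
    plugin.Serve(&plugin.ServeOpts{
        ProviderFunc: onelogin.Provider,
    })
}

// onelogin/provider.go — declares the credentials we need and the resources we manage.
package onelogin

import (
    "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
    "github.com/hashicorp/terraform-plugin-sdk/terraform"

    // assumed import path for the OneLogin Go SDK client
    "github.com/onelogin/onelogin-go-sdk/pkg/client"
)

func Provider() terraform.ResourceProvider {
    return &schema.Provider{
        Schema: map[string]*schema.Schema{
            // assumed credential fields, supplied via the provider block or environment
            "client_id":     &schema.Schema{Type: schema.TypeString, Required: true},
            "client_secret": &schema.Schema{Type: schema.TypeString, Required: true, Sensitive: true},
        },
        ResourcesMap: map[string]*schema.Resource{
            "onelogin_apps":      Apps(),
            "onelogin_saml_apps": SAMLApps(),
            "onelogin_oidc_apps": OIDCApps(),
        },
        ConfigureFunc: configureProvider,
    }
}

// configureProvider builds the OneLogin SDK client that every resource receives
// as its meta argument. The APIClientConfig type and its fields are assumptions.
func configureProvider(d *schema.ResourceData) (interface{}, error) {
    return client.NewClient(&client.APIClientConfig{
        ClientID:     d.Get("client_id").(string),
        ClientSecret: d.Get("client_secret").(string),
    })
}

Next we define the resource itself. I’ll focus on basic apps, since SAML and OIDC apps are built in essentially the same way.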

Adding a Resource

We define a Terraform resource in onelogin/resource_onelogin_apps.go, following the Terraform convention of naming resource files resource_<provider name>_<resource>.go. The functions we have to define here are Apps, which the provider wires up, and the Create, Read, Update, and Delete (CRUD) actions for interfacing with our API.

func Apps() *schema.Resource {
    return &schema.Resource{
        Create: appCreate,
        Read:   appRead,
        Update: appUpdate,
        Delete: appDelete,
        Schema: app.Schema(),
    }
}

The pattern for each method is to initialize a client, inflate the ResourceData that Terraform gives us into a OneLogin representation, call the API with that representation, and then call appRead. For Create and Update we call appRead at the end, which reads the app we just modified back from the API, flattens it, and writes it into our local .tfstate file.
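Sketched out, the create path might look like the following; the apiClient type, its Services.AppsV2.Create call, and the appschema.Inflate helper are assumptions standing in for the OneLogin Go SDK and our ol_schema package, but the shape of the pattern is the point.

// onelogin/resource_onelogin_apps.go (sketch of the create path).
// Assumed imports: fmt, the terraform-plugin-sdk schema package, the OneLogin
// SDK client package, and our hypothetical ol_schema app package (appschema).
func appCreate(d *schema.ResourceData, m interface{}) error {
    apiClient := m.(*client.APIClient) // SDK client built by the provider's ConfigureFunc

    // Inflate: Terraform ResourceData -> OneLogin SDK App struct.
    app := appschema.Inflate(map[string]interface{}{
        "name":        d.Get("name"),
        "description": d.Get("description"),
        "parameters":  d.Get("parameters"),
    })

    // Assumed SDK call; on success the API-assigned ID is filled in on the struct.
    if err := apiClient.Services.AppsV2.Create(&app); err != nil {
        return err
    }

    // Record the new ID, then read the app back so its flattened form lands in .tfstate.
    d.SetId(fmt.Sprintf("%d", *app.ID))
    return appRead(d, m)
}

appUpdate and appDelete follow the same shape, swapping in the corresponding SDK call.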

Handling Complex Data in Terraform

Typically, Terraform is used to manage relatively shallow objects like AWS resources, not deeply nested objects like these. Therefore, we needed a clean, consistent way to inflate and flatten nested data and represent it in Terraform in a way that fits how it expects data. We chose to implement an Inflate and Flatten interface for each nested object and have each construct define its own Terraform schema, returning it to the next level up via a Schema method.

The Schema node on the Resource defines how Terraform will represent the app. We elected to move the app schema into a separate folder, ol_schema (starting with ol_schema/app.go). In fact, we create a file for every nested resource node coming from the API. For instance, some apps have parameters that come with the response.

For the example above, the parameters are represented in ol_schema/app/parameters/parameter.go. Organizing the code like this lets us focus on coding to an “interface” for each construct. That “interface” is simply an agreement to implement the following three signatures: Schema returns the Terraform schema definition, Inflate converts the Terraform representation to a struct defined by the OneLogin SDK before sending it to the API, and Flatten does the reverse, converting the OneLogin struct to a list of maps that Terraform can digest.

    Schema()
    Inflate(s map[string]interface{}) apps.AppParameters
    Flatten(prov models.AppParameters) []map[string]interface{}
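As a deliberately tiny example, a sketch of this trio for the provisioning node ({"enabled": false} in the API response above) might look like the following; the sub-package path and the apps.AppProvisioning struct with its Enabled field are assumptions standing in for the OneLogin SDK model.

// ol_schema/app/provisioning/provisioning.go (sketch)
package appprovisioningschema

import (
    "github.com/hashicorp/terraform-plugin-sdk/helper/schema"

    // assumed import path for the OneLogin SDK apps models
    apps "github.com/onelogin/onelogin-go-sdk/pkg/services/apps"
)

// Schema returns the Terraform representation of the provisioning node.
func Schema() map[string]*schema.Schema {
    return map[string]*schema.Schema{
        "enabled": &schema.Schema{
            Type:     schema.TypeBool,
            Optional: true,
        },
    }
}

// Inflate converts the Terraform map into the SDK struct we send to the API.
func Inflate(s map[string]interface{}) apps.AppProvisioning {
    enabled, _ := s["enabled"].(bool)
    return apps.AppProvisioning{Enabled: &enabled}
}

// Flatten converts the SDK struct into the list of maps Terraform digests.
func Flatten(prov apps.AppProvisioning) []map[string]interface{} {
    enabled := false
    if prov.Enabled != nil {
        enabled = *prov.Enabled
    }
    return []map[string]interface{}{
        {"enabled": enabled},
    }
}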

Doing this lets us test each Inflate and Flatten independently, which got us to 100% test coverage for all Inflate and Flatten methods. It also keeps file sizes small and enables the team to work in parallel as we add more features.

Each resource construct is responsible for defining how it needs to be represented in Terraform. For instance, the App construct defines how its fields and subfields are represented. Each subfield (e.g. parameters) is then responsible for how it is represented in Terraform using the schema.

"parameters": &schema.Schema{
Type: schema.TypeList,
Optional: true,
Computed: true,
Elem: &schema.Resource{
Schema: parameters.Schema(),
},
}

When defining nested resources like this, Terraform gives you three options: TypeList, TypeSet, and TypeMap. For our case, given the app’s relatively complex representation, we had the best luck with TypeList. Essentially, if our app has a single nested resource like a configuration or sso node, it gets added to a TypeList with MaxItems set to 1. By representing nested resources as a TypeList (or a TypeList within a TypeList), we’re able to flatten and inflate these fields recursively, which makes adding new or complex resources relatively straightforward.
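For a node that only ever appears once, such as configuration or sso, the schema entry looks like the parameters entry above but capped at one element (a sketch, assuming a configuration sub-package with its own Schema function):

"configuration": &schema.Schema{
    Type:     schema.TypeList,
    Optional: true,
    Computed: true,
    MaxItems: 1, // a single nested object, represented as a one-element list
    Elem: &schema.Resource{
        Schema: configuration.Schema(),
    },
}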

Testing

Part of the submission process for Terraform involves acceptance tests. Terraform gives you methods for testing your provider by walking through the entire Refresh => Apply => Destroy cycle.

We start by adding a provider_test.go file where we initialize our provider for testing and check that the provider’s API credentials are reachable from the environment. That is done in the TestAccPreCheck function, which halts all the tests if the credentials are not set. It is important to note that acceptance tests create (and then clean up) actual resources, so make sure the credentials are in place and you’re connected to the internet when running them.
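A sketch of that file is below; the environment variable names are assumptions about how the provider reads its credentials, and the TF_ACC guard mirrors Terraform’s convention of only running acceptance tests when that variable is set.

// onelogin/provider_test.go (sketch)
package onelogin

import (
    "os"
    "testing"

    "github.com/hashicorp/terraform-plugin-sdk/terraform"
)

// testAccProviders is handed to every acceptance test case.
var testAccProviders = map[string]terraform.ResourceProvider{
    "onelogin": Provider(),
}

// TestAccPreCheck halts the acceptance tests when the API credentials are missing.
func TestAccPreCheck(t *testing.T) {
    if os.Getenv("TF_ACC") == "" {
        t.Skip("TF_ACC not set; skipping acceptance test precheck")
    }
    // assumed environment variable names for the OneLogin API credentials
    if os.Getenv("ONELOGIN_CLIENT_ID") == "" || os.Getenv("ONELOGIN_CLIENT_SECRET") == "" {
        t.Fatal("ONELOGIN_CLIENT_ID and ONELOGIN_CLIENT_SECRET must be set for acceptance tests")
    }
}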

Next, for each resource, we added a <resource>_test.go file like so:
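(A rough sketch; the fixture file names, the resource address, its attribute values, and GetFixture’s exact signature are assumptions.)

// onelogin/resource_onelogin_apps_test.go (sketch)
package onelogin

import (
    "testing"

    "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

func TestAccApp_CreateThenUpdate(t *testing.T) {
    base := GetFixture("onelogin_app_example.tf", t)            // HCL for the first apply
    update := GetFixture("onelogin_app_updated_example.tf", t)  // HCL after an edit

    resource.Test(t, resource.TestCase{
        PreCheck:  func() { TestAccPreCheck(t) },
        Providers: testAccProviders,
        Steps: []resource.TestStep{
            {
                Config: base, // first terraform apply
                Check: resource.ComposeTestCheckFunc(
                    resource.TestCheckResourceAttr("onelogin_apps.basic_test", "name", "basic test app"),
                ),
            },
            {
                Config: update, // second apply with the modified HCL
                Check: resource.ComposeTestCheckFunc(
                    resource.TestCheckResourceAttr("onelogin_apps.basic_test", "name", "updated basic test app"),
                ),
            },
        },
    })
}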

Basically, we tell Terraform to use our test provider, load an example .tf file (we had to define a GetFixture function that returns your HCL example file as a string), and then run the given test steps. Each step “applies” the HCL as if it were a change made by you. So here, it goes from base, which is the first time you’d run terraform apply, to update, which represents the HCL file after you’ve made some changes and run terraform apply again. Then we add some checks to make sure our changes actually made it into our state file.

Since we split the notion of Terraform resources from resource constructs, we’re able to test those constructs independently of Terraform and achieve 100% test coverage using the standard go test command.

Conclusion

Using the main.tf below, one can now spin up and configure as many apps as needed and then check that file into version control or use it in an automated workflow to track the state of apps and how users log into them.

Example main.tf that creates a basic, a SAML, and an OIDC app
Result of the above main.tf

Now that our architecture is established, we’re adding more IAM resources, starting with user attribute mappings, so you can track how user attributes get translated between your directories and ensure a seamless onboarding experience for all your users. Using an infrastructure-as-code tool enables you to define your access constructs and track changes to them from a single point of control using popular version control software. OneLogin’s Terraform provider makes it possible to view and manage your IAM infrastructure all in one place without being bogged down by a user interface.

If you’d like to integrate OneLogin into your workflow, we offer a number of great APIs and SDKs; check out our Developers Page to get started. Spot an issue? Head over to our GitHub and drop us an issue and we’ll get it looked at ASAP. If you have further questions or want to chat about building your own Terraform provider or Go SDK, hit me up here or here.
