Compare commits: master...add-spec-f (1 commit)

Commit 68202f3c06

.gitignore: 3 changed lines

@@ -1,4 +1,3 @@
*.swp
bin/
coverage/
gopath/
pkg/

.travis.yml: 12 changed lines

@@ -1,12 +0,0 @@
language: go
sudo: false
matrix:
  include:
    - go: 1.4
      install:
        - go get golang.org/x/tools/cmd/cover
        - go get golang.org/x/tools/cmd/vet
    - go: 1.5

script:
  - ./test

@ -1,68 +0,0 @@
|
||||
# How to Contribute
|
||||
|
||||
CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via
|
||||
GitHub pull requests. This document outlines some of the conventions on
|
||||
development workflow, commit message formatting, contact points and other
|
||||
resources to make it easier to get your contribution accepted.
|
||||
|
||||
# Certificate of Origin
|
||||
|
||||
By contributing to this project you agree to the Developer Certificate of
|
||||
Origin (DCO). This document was created by the Linux Kernel community and is a
|
||||
simple statement that you, as a contributor, have the legal right to make the
|
||||
contribution. See the [DCO](DCO) file for details.
|
||||
|
||||
# Email and Chat
|
||||
|
||||
The project currently uses the general CoreOS email list and IRC channel:
|
||||
- Email: [coreos-dev](https://groups.google.com/forum/#!forum/coreos-dev)
|
||||
- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org
|
||||
|
||||
## Getting Started
|
||||
|
||||
- Fork the repository on GitHub
|
||||
- Read the [README](README.md) for build and test instructions
|
||||
- Play with the project, submit bugs, submit patches!
|
||||
|
||||
## Contribution Flow
|
||||
|
||||
This is a rough outline of what a contributor's workflow looks like:
|
||||
|
||||
- Create a topic branch from where you want to base your work (usually master).
|
||||
- Make commits of logical units.
|
||||
- Make sure your commit messages are in the proper format (see below).
|
||||
- Push your changes to a topic branch in your fork of the repository.
|
||||
- Make sure the tests pass, and add any new tests as appropriate.
|
||||
- Submit a pull request to the original repository.
|
||||
|
||||
Thanks for your contributions!
|
||||
|
||||
### Format of the Commit Message
|
||||
|
||||
We follow a rough convention for commit messages that is designed to answer two
|
||||
questions: what changed and why. The subject line should feature the what and
|
||||
the body of the commit should describe the why.
|
||||
|
||||
```
|
||||
environment: write new keys in consistent order
|
||||
|
||||
Go 1.3 randomizes the ordering of keys when iterating over a map.
|
||||
Sort the keys to make this ordering consistent.
|
||||
|
||||
Fixes #38
|
||||
```
|
||||
|
||||
The format can be described more formally as follows:
|
||||
|
||||
```
|
||||
<subsystem>: <what changed>
|
||||
<BLANK LINE>
|
||||
<why this change was made>
|
||||
<BLANK LINE>
|
||||
<footer>
|
||||
```
|
||||
|
||||
The first line is the subject and should be no longer than 70 characters, the
|
||||
second line is always blank, and other lines should be wrapped at 80 characters.
|
||||
This allows the message to be easier to read on GitHub as well as in various
|
||||
git tools.
|
DCO: 36 changed lines

@@ -1,36 +0,0 @@
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

@@ -1,38 +0,0 @@
# Deprecated Cloud-Config Features

## Retrieving SSH Authorized Keys

### From a GitHub User

Using the `coreos-ssh-import-github` field, we can import public SSH keys from a GitHub user to use as authorized keys to a server.

```yaml
#cloud-config

users:
  - name: elroy
    coreos-ssh-import-github: elroy
```

### From an HTTP Endpoint

We can also pull public SSH keys from any HTTP endpoint which matches [GitHub's API response format](https://developer.github.com/v3/users/keys/#list-public-keys-for-a-user).
For example, if you have an installation of GitHub Enterprise, you can provide a complete URL with an authentication token:

```yaml
#cloud-config

users:
  - name: elroy
    coreos-ssh-import-url: https://github-enterprise.example.com/api/v3/users/elroy/keys?access_token=<TOKEN>
```

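
As a sanity check, you can inspect the JSON that an import endpoint returns before pointing cloud-config at it. A minimal sketch, querying the public GitHub API for the example username used above (assumes the API is reachable from your host):

```sh
# List the public keys GitHub serves for a user; import endpoints are expected
# to return JSON in this same format.
curl -s https://api.github.com/users/elroy/keys
```
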
You can also specify any URL whose response matches the JSON format for public keys:

```yaml
#cloud-config

users:
  - name: elroy
    coreos-ssh-import-url: https://example.com/public-keys
```

@@ -1,26 +0,0 @@
# Cloud-Config Locations

On every boot, coreos-cloudinit looks for a config file to configure your host. Here is a list of locations which are used by the Cloud-Config utility, depending on your CoreOS platform:

| Location | Description |
| --- | --- |
|`/media/configvirtfs/openstack/latest/user_data`|`/media/configvirtfs` mount point with [config-2](/os/docs/latest/config-drive.html#contents-and-format) label. It should contain an `openstack/latest/user_data` relative path. Usually used by cloud providers or in VM installations.|
|`/media/configdrive/openstack/latest/user_data`|FAT or ISO9660 filesystem with [config-2](/os/docs/latest/config-drive.html#qemu-virtfs) label and `/media/configdrive/` mount point. It should also contain an `openstack/latest/user_data` relative path. Usually used in installations which are configured by USB flash sticks or CDROM media.|
|Kernel command line: `cloud-config-url=http://example.com/user_data`|You can find this string using the command `cat /proc/cmdline`. Usually used in [PXE](/os/docs/latest/booting-with-pxe.html) or [iPXE](/os/docs/latest/booting-with-ipxe.html) boots.|
|`/var/lib/coreos-install/user_data`|Written when you install CoreOS manually using the [coreos-install](/os/docs/latest/installing-to-disk.html) tool. Usually used in bare metal installations.|
|`/usr/share/oem/cloud-config.yml`|Path for OEM images.|
|`/var/lib/coreos-vagrant/vagrantfile-user-data`|Vagrant OEM scripts automatically store Cloud-Config into this path.|
|`/var/lib/waagent/CustomData`|The Azure platform uses the OEM path for the first Cloud-Config initialization and then `/var/lib/waagent/CustomData` to apply your settings.|
|`http://169.254.169.254/metadata/v1/user-data` `http://169.254.169.254/2009-04-04/user-data` `https://metadata.packet.net/userdata`|DigitalOcean, EC2 and Packet cloud providers respectively use these URLs to download Cloud-Config.|
|`/usr/share/oem/bin/vmtoolsd --cmd "info-get guestinfo.coreos.config.data"`|Cloud-Config provided by [VMware Guestinfo][VMware Guestinfo].|
|`/usr/share/oem/bin/vmtoolsd --cmd "info-get guestinfo.coreos.config.url"`|Cloud-Config URL provided by [VMware Guestinfo][VMware Guestinfo].|

[VMware Guestinfo]: vmware-guestinfo.md

You can also run the `coreos-cloudinit` tool manually and provide a path to your custom Cloud-Config file:

```sh
sudo coreos-cloudinit --from-file=/home/core/cloud-config.yaml
```

This command will apply your custom cloud-config.

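
If you are unsure which of these sources applies to a given machine, the kernel command line mentioned in the table is easy to check directly. A small, read-only sketch:

```sh
# Look for a cloud-config-url=... parameter passed in by PXE/iPXE
cat /proc/cmdline
```
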
@@ -1,37 +0,0 @@
## OEM configuration

The `coreos.oem.*` parameters follow the [os-release spec][os-release], but have been repurposed as a way for coreos-cloudinit to know about the OEM partition on this machine. Customizing this section is only needed when generating a new OEM of CoreOS from the SDK. The fields include:

- **id**: Lowercase string identifying the OEM
- **name**: Human-friendly string representing the OEM
- **version-id**: Lowercase string identifying the version of the OEM
- **home-url**: Link to the homepage of the provider or OEM
- **bug-report-url**: Link to a place to file bug reports about this OEM

coreos-cloudinit renders these fields to `/etc/oem-release`.
If no **id** field is provided, coreos-cloudinit will ignore this section.

For example, the following cloud-config document...

```yaml
#cloud-config
coreos:
  oem:
    id: "rackspace"
    name: "Rackspace Cloud Servers"
    version-id: "168.0.0"
    home-url: "https://www.rackspace.com/cloud/servers/"
    bug-report-url: "https://github.com/coreos/coreos-overlay"
```

...would be rendered to the following `/etc/oem-release`:

```yaml
ID=rackspace
NAME="Rackspace Cloud Servers"
VERSION_ID=168.0.0
HOME_URL="https://www.rackspace.com/cloud/servers/"
BUG_REPORT_URL="https://github.com/coreos/coreos-overlay"
```

[os-release]: http://www.freedesktop.org/software/systemd/man/os-release.html

@@ -1,379 +1,40 @@
# Customize CoreOS with Cloud-Config

CoreOS allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units. This document describes the full list of items we can configure. The `coreos-cloudinit` program uses these files as it configures the OS after startup or during runtime.
Only a subset of [cloud-config functionality][cloud-config] is implemented, and a set of custom parameters specific to CoreOS has been added to the cloud-config format.

Your cloud-config is processed during each boot. Invalid cloud-config won't be processed but will be logged in the journal. You can validate your cloud-config with the [CoreOS validator]({{site.url}}/validate) or by running `coreos-cloudinit -validate`.

In addition to the `coreos-cloudinit -validate` command and the https://coreos.com/validate/ online service, you can inspect `coreos-cloudinit` output through the `journalctl` tool:

```sh
journalctl _EXE=/usr/bin/coreos-cloudinit
```

This shows the output of the `coreos-cloudinit` run that was triggered by system boot.

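
Before rebooting a machine to test a change, you can run the validator against a local file. A minimal sketch combining the `-validate` and `--from-file` flags mentioned in this document (the flag combination is assumed, not confirmed here):

```sh
# Exit status is non-zero if the document fails validation
coreos-cloudinit -validate --from-file=/home/core/cloud-config.yaml
```
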
## Configuration File

The file used by this system initialization program is called a "cloud-config" file. It is inspired by the [cloud-init][cloud-init] project's [cloud-config][cloud-config] file, which is "the defacto multi-distribution package that handles early initialization of a cloud instance" ([cloud-init docs][cloud-init-docs]). Because the cloud-init project includes tools which aren't used by CoreOS, only the relevant subset of its configuration items is implemented in our cloud-config file. In addition to those, we added a few CoreOS-specific items, such as etcd configuration, OEM definition, and systemd units.

We've designed our implementation to allow the same cloud-config file to work across all of our supported platforms.

[cloud-init]: https://launchpad.net/cloud-init
[cloud-init-docs]: http://cloudinit.readthedocs.org/en/latest/index.html
[cloud-config]: http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data

### File Format

The cloud-config file uses the [YAML][yaml] file format, which uses whitespace and new-lines to delimit lists, associative arrays, and values.

A cloud-config file must contain a header: either `#cloud-config` for processing as cloud-config (suggested) or `#!` for processing as a shell script (advanced). If the cloud-config has the `#cloud-config` header, it should be followed by an associative array which has zero or more of the following keys:

- `coreos`
- `ssh_authorized_keys`
- `hostname`
- `users`
- `write_files`
- `manage_etc_hosts`

The expected values for these keys are defined in the rest of this document.

If the header starts with `#!`, coreos-cloudinit recognizes the file as a shell script, which is interpreted by bash and run as a transient systemd service.

[yaml]: https://en.wikipedia.org/wiki/YAML

### Providing Cloud-Config with Config-Drive

CoreOS tries to conform to each platform's native method to provide user data. Each cloud provider tends to be unique, but this complexity has been abstracted by CoreOS. You can view each platform's instructions on their documentation pages. The most universal way to provide cloud-config is [via config-drive](https://github.com/coreos/coreos-cloudinit/blob/master/Documentation/config-drive.md), which attaches a read-only device containing your cloud-config file to the machine.

## Configuration Parameters

### coreos

#### etcd (deprecated. see etcd2)

The `coreos.etcd.*` parameters will be translated to a partial systemd unit acting as an etcd configuration file.
If the platform environment supports the templating feature of coreos-cloudinit, it is possible to automate etcd configuration with the `$private_ipv4` and `$public_ipv4` fields. For example, the following cloud-config document...

```yaml
#cloud-config

coreos:
  etcd:
    name: "node001"
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: "https://discovery.etcd.io/<token>"
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: "$public_ipv4:4001"
    peer-addr: "$private_ipv4:7001"
```

...will generate a systemd unit drop-in for etcd.service with the following contents:

```yaml
[Service]
Environment="ETCD_NAME=node001"
Environment="ETCD_DISCOVERY=https://discovery.etcd.io/<token>"
Environment="ETCD_ADDR=203.0.113.29:4001"
Environment="ETCD_PEER_ADDR=192.0.2.13:7001"
```

For more information about the available configuration parameters, see the [etcd documentation][etcd-config].

_Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are only supported on Amazon EC2, Google Compute Engine, OpenStack, Rackspace, DigitalOcean, and Vagrant._

[etcd-config]: https://github.com/coreos/etcd/blob/release-0.4/Documentation/configuration.md

#### etcd2

The `coreos.etcd2.*` parameters will be translated to a partial systemd unit acting as an etcd configuration file.
If the platform environment supports the templating feature of coreos-cloudinit, it is possible to automate etcd configuration with the `$private_ipv4` and `$public_ipv4` fields. When generating a [discovery token](https://discovery.etcd.io/new?size=3), set the `size` parameter, since etcd uses this to determine if all members have joined the cluster. After the cluster is bootstrapped, it can grow or shrink from this configured size.

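
A discovery token is simply the result of an HTTP request to the public discovery service referenced above. A quick sketch for fetching one from the command line:

```sh
# Request a token for a three-member cluster; paste the returned URL into `discovery:`
curl -w "\n" 'https://discovery.etcd.io/new?size=3'
```
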
For example, the following cloud-config document...

```yaml
#cloud-config

coreos:
  etcd2:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
    discovery: "https://discovery.etcd.io/<token>"
    # multi-region and multi-cloud deployments need to use $public_ipv4
    advertise-client-urls: "http://$public_ipv4:2379"
    initial-advertise-peer-urls: "http://$private_ipv4:2380"
    # listen on both the official ports and the legacy ports
    # legacy ports can be omitted if your application doesn't depend on them
    listen-client-urls: "http://0.0.0.0:2379,http://0.0.0.0:4001"
    listen-peer-urls: "http://$private_ipv4:2380,http://$private_ipv4:7001"
```

...will generate a systemd unit drop-in for etcd2.service with the following contents:

```yaml
[Service]
Environment="ETCD_DISCOVERY=https://discovery.etcd.io/<token>"
Environment="ETCD_ADVERTISE_CLIENT_URLS=http://203.0.113.29:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://192.0.2.13:2380"
Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001"
Environment="ETCD_LISTEN_PEER_URLS=http://192.0.2.13:2380,http://192.0.2.13:7001"
```

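
Once the machine has booted, you can confirm that the drop-in was actually written using standard systemd tooling. A small sketch (the drop-in file name is generated by cloud-init, so read it from `systemctl cat` rather than guessing a path):

```sh
# Show etcd2.service together with any drop-ins that modify it
systemctl cat etcd2.service
systemctl status etcd2.service
```
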
For more information about the available configuration parameters, see the [etcd2 documentation][etcd2-config].

_Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are only supported on Amazon EC2, Google Compute Engine, OpenStack, Rackspace, DigitalOcean, and Vagrant._

[etcd2-config]: https://github.com/coreos/etcd/blob/master/Documentation/configuration.md

#### fleet

The `coreos.fleet.*` parameters work very similarly to `coreos.etcd2.*`, and allow for the configuration of fleet through environment variables. For example, the following cloud-config document...

```yaml
#cloud-config

coreos:
  fleet:
    public-ip: "$public_ipv4"
    metadata: "region=us-west"
```

...will generate a systemd unit drop-in like this:

```yaml
[Service]
Environment="FLEET_PUBLIC_IP=203.0.113.29"
Environment="FLEET_METADATA=region=us-west"
```

List of fleet configuration parameters:

- **agent_ttl**: An agent will be considered dead if it exceeds this amount of time to communicate with the registry
- **engine_reconcile_interval**: Interval in seconds at which the engine should reconcile the cluster schedule in etcd
- **etcd_cafile**: Path to CA file used for TLS communication with etcd
- **etcd_certfile**: Path to certificate file used when SSL certificate authentication is enabled in etcd endpoints
- **etcd_keyfile**: Path to private key file used for TLS communication with etcd
- **etcd_key_prefix**: etcd prefix path to be used for fleet keys
- **etcd_request_timeout**: Amount of time in seconds to allow a single etcd request before considering it failed
- **etcd_servers**: Comma-separated list of etcd endpoints
- **metadata**: Comma-separated key/value pairs that are published with the local machine to the fleet registry
- **public_ip**: IP accessible by other nodes for inter-host communication
- **verbosity**: Enable debug logging by setting this to an integer value greater than zero

For more information on fleet configuration, see the [fleet documentation][fleet-config].

[fleet-config]: https://github.com/coreos/fleet/blob/master/Documentation/deployment-and-configuration.md#configuration

#### flannel

The `coreos.flannel.*` parameters also work very similarly to `coreos.etcd2.*`
and `coreos.fleet.*`. They can be used to set environment variables for
flanneld. For example, the following cloud-config...

```yaml
#cloud-config

coreos:
  flannel:
    etcd_prefix: "/coreos.com/network2"
```

...will generate a systemd unit drop-in like so:

```
[Service]
Environment="FLANNELD_ETCD_PREFIX=/coreos.com/network2"
```

List of flannel configuration parameters:

- **etcd_endpoints**: Comma-separated list of etcd endpoints
- **etcd_cafile**: Path to CA file used for TLS communication with etcd
- **etcd_certfile**: Path to certificate file used for TLS communication with etcd
- **etcd_keyfile**: Path to private key file used for TLS communication with etcd
- **etcd_prefix**: etcd prefix path to be used for flannel keys
- **ip_masq**: Install IP masquerade rules for traffic outside of the flannel subnet
- **subnet_file**: Path to flannel subnet file to write out
- **interface**: Interface (name or IP) that should be used for inter-host communication
- **public_ip**: IP accessible by other nodes for inter-host communication

For more information on flannel configuration, see the [flannel documentation][flannel-readme].

[flannel-readme]: https://github.com/coreos/flannel/blob/master/README.md

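
Note that these parameters configure the flanneld daemon itself; the overlay network definition still lives in etcd under `etcd_prefix`. A hedged sketch of publishing that key with the etcd v2 CLI (the JSON schema is the network config described in flannel's README):

```sh
# Publish flannel's network config under the prefix used in the example above
etcdctl set /coreos.com/network2/config '{ "Network": "10.1.0.0/16" }'
```
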
#### locksmith

The `coreos.locksmith.*` parameters can be used to set environment variables
for locksmith. For example, the following cloud-config...

```yaml
#cloud-config

coreos:
  locksmith:
    endpoint: "http://example.com:2379"
```

...will generate a systemd unit drop-in like so:

```
[Service]
Environment="LOCKSMITHD_ENDPOINT=http://example.com:2379"
```

List of locksmith configuration parameters:

- **endpoint**: Comma-separated list of etcd endpoints
- **etcd_cafile**: Path to CA file used for TLS communication with etcd
- **etcd_certfile**: Path to certificate file used for TLS communication with etcd
- **etcd_keyfile**: Path to private key file used for TLS communication with etcd

For the complete list of locksmith configuration parameters, see the [locksmith documentation][locksmith-readme].

[locksmith-readme]: https://github.com/coreos/locksmith/blob/master/README.md

#### update

The `coreos.update.*` parameters manipulate settings related to how CoreOS instances are updated.

These fields will be written out to and replace `/etc/coreos/update.conf`. If only one of the parameters is given, only that field will be overwritten.
The `reboot-strategy` parameter also affects the behaviour of [locksmith](https://github.com/coreos/locksmith).

- **reboot-strategy**: One of "reboot", "etcd-lock", "best-effort" or "off" for controlling when reboots are issued after an update is performed.
  - _reboot_: Reboot immediately after an update is applied.
  - _etcd-lock_: Reboot after first taking a distributed lock in etcd. This guarantees that only one host will reboot concurrently and that the cluster will remain available during the update.
  - _best-effort_: If etcd is running, "etcd-lock", otherwise simply "reboot".
  - _off_: Disable rebooting after updates are applied (not recommended).
- **server**: The location of the [CoreUpdate][coreupdate] server which will be queried for updates. Also known as the [omaha][omaha-docs] server endpoint.
- **group**: Signifies the channel which should be used for automatic updates (one of "master", "alpha", "beta", "stable"). This value defaults to the channel of the image initially downloaded.

[coreupdate]: https://coreos.com/products/coreupdate
[omaha-docs]: https://coreos.com/docs/coreupdate/custom-apps/coreupdate-protocol/

*Note: cloudinit will only manipulate the locksmith unit file in the systemd runtime directory (`/run/systemd/system/locksmithd.service`). If any manual modifications are made to an overriding unit configuration file (e.g. `/etc/systemd/system/locksmithd.service`), cloudinit will no longer be able to control the locksmith service unit.*

##### Example

```yaml
#cloud-config
coreos:
  update:
    reboot-strategy: "etcd-lock"
```

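
Since these values are persisted to `/etc/coreos/update.conf`, an easy way to confirm the strategy took effect after boot is to read that file back. A minimal sketch:

```sh
# Expect a line such as REBOOT_STRATEGY=etcd-lock
cat /etc/coreos/update.conf
```
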
#### units

The `coreos.units.*` parameters define a list of arbitrary systemd units to start after booting. This feature is intended to help you start essential services required to mount storage and configure networking in order to join the CoreOS cluster. It is not intended to be a Chef/Puppet replacement.

Each item is an object with the following fields:

- **name**: String representing the unit's name. Required.
- **runtime**: Boolean indicating whether or not to persist the unit across reboots. This is analogous to the `--runtime` argument to `systemctl enable`. The default value is false.
- **enable**: Boolean indicating whether or not to handle the [Install] section of the unit file. This is similar to running `systemctl enable <name>`. The default value is false.
- **content**: Plaintext string representing the entire unit file. If no value is provided, the unit is assumed to exist already.
- **command**: Command to execute on the unit: start, stop, reload, restart, try-restart, reload-or-restart, reload-or-try-restart. The default behavior is to not execute any commands.
- **mask**: Whether to mask the unit file by symlinking it to `/dev/null` (analogous to `systemctl mask <name>`). Note that unlike `systemctl mask`, **this will destructively remove any existing unit file** located at `/etc/systemd/system/<unit>`, to ensure that the mask succeeds. The default value is false.
- **drop-ins**: A list of unit drop-ins with the following fields:
  - **name**: String representing the drop-in's name. Required.
  - **content**: Plaintext string representing the entire file. Required.

**NOTE:** The command field is ignored for all network, netdev, and link units. The systemd-networkd.service unit will be restarted in their place.

##### Examples

Write a unit to disk, automatically starting it.

```yaml
#cloud-config

coreos:
  units:
    - name: "docker-redis.service"
      command: "start"
      content: |
        [Unit]
        Description=Redis container
        Author=Me
        After=docker.service

        [Service]
        Restart=always
        ExecStart=/usr/bin/docker start -a redis_server
        ExecStop=/usr/bin/docker stop -t 2 redis_server
```

Add the DOCKER_OPTS environment variable to docker.service.

```yaml
#cloud-config

coreos:
  units:
    - name: "docker.service"
      drop-ins:
        - name: "50-insecure-registry.conf"
          content: |
            [Service]
            Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
```

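
After boot, you can verify that a drop-in like the one above was written and picked up using the same systemd tooling as for any other unit. A short sketch:

```sh
# The drop-in should appear below the main unit file in the output
systemctl cat docker.service

# If you later edit drop-ins by hand, reload systemd so it rereads them
sudo systemctl daemon-reload
```
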
Start the built-in `etcd2` and `fleet` services:

```yaml
#cloud-config

coreos:
  units:
    - name: "etcd2.service"
      command: "start"
    - name: "fleet.service"
      command: "start"
```

## Supported cloud-config Parameters

### ssh_authorized_keys

The `ssh_authorized_keys` parameter adds public SSH keys which will be authorized for the `core` user.

The keys will be named "coreos-cloudinit" by default.
Override this by using the `--ssh-key-name` flag when calling `coreos-cloudinit`.

```yaml
#cloud-config

ssh_authorized_keys:
  - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
```

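
When running coreos-cloudinit by hand, the key-name override looks roughly like the sketch below (the file path is a placeholder; `--ssh-key-name` and `--from-file` are the flags named in this document):

```sh
# Authorize the keys from the config under the name "my-deploy-key" instead of "coreos-cloudinit"
sudo coreos-cloudinit --ssh-key-name="my-deploy-key" --from-file=/home/core/cloud-config.yaml
```
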
### hostname

The `hostname` parameter defines the system's hostname.
This is the local part of a fully-qualified domain name (i.e. `foo` in `foo.example.com`).

```yaml
#cloud-config

hostname: "coreos1"
```

### users

The `users` parameter adds or modifies the specified list of users. Each user is an object which consists of the following fields. Each field is optional and of type string unless otherwise noted.
All but the `passwd` and `ssh-authorized-keys` fields will be ignored if the user already exists.

- **name**: Required. Login name of user
- **gecos**: GECOS comment of user
- **passwd**: Hash of the password to use for this user
- **homedir**: User's home directory. Defaults to /home/\<name\>
- **no-create-home**: Boolean. Skip home directory creation.
- **primary-group**: Default group for the user. Defaults to a new group created named after the user.
- **groups**: Add user to these additional groups
- **no-user-group**: Boolean. Skip default group creation.
- **ssh-authorized-keys**: List of public SSH keys to authorize for this user
- **coreos-ssh-import-github** [DEPRECATED]: Authorize SSH keys from GitHub user
- **coreos-ssh-import-github-users** [DEPRECATED]: Authorize SSH keys from a list of GitHub users
- **coreos-ssh-import-url** [DEPRECATED]: Authorize SSH keys imported from a URL endpoint.
- **system**: Create the user as a system user. No home directory will be created.
- **no-log-init**: Boolean. Skip initialization of lastlog and faillog databases.
- **shell**: User's login shell.

The following fields are not yet implemented:

@@ -383,96 +44,153 @@ The following fields are not yet implemented:

- **selinux-user**: Corresponding SELinux user
- **ssh-import-id**: Import SSH keys by ID from Launchpad.

```yaml
#cloud-config

users:
  - name: "elroy"
    passwd: "$6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm..."
    groups:
      - "sudo"
      - "docker"
    ssh-authorized-keys:
      - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
```

#### Generating a password hash

If you choose to use a password instead of an SSH key, generating a safe hash is extremely important to the security of your system. With updated tools like [oclhashcat](http://hashcat.net/oclhashcat/), simplified hashes like md5crypt are trivial to crack on modern GPU hardware. You can generate a "safer" hash (read: not safe, never publish your hashes publicly) via:

```
# On Debian/Ubuntu (via the package "whois")
mkpasswd --method=SHA-512 --rounds=4096

# OpenSSL (note: this will only make md5crypt. While better than plaintext it should not be considered fully secure)
openssl passwd -1

# Python (change password and salt values)
python -c "import crypt, getpass, pwd; print crypt.crypt('password', '\$6\$SALT\$')"

# Perl (change password and salt values)
perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'
```

Using a higher number of rounds will help create more secure passwords, but given enough time, password hashes can be reversed. On most RPM-based distributions there is a tool called mkpasswd available in the `expect` package, but this does not handle "rounds" nor advanced hashing algorithms.

### write_files

The `write_files` directive injects an arbitrary set of files into the local filesystem.
Provide a list of objects, each of which may have the following keys:

- **path**: Absolute location on disk where contents should be written
- **content**: Data to write at the provided `path`
- **permissions**: String representing file permissions in octal notation (i.e. '0644')
- **owner**: User and group that should own the file written to disk. This is equivalent to the `<user>:<group>` argument to `chown <user>:<group> <path>`.
- **encoding**: Optional. The encoding of the data in content. If not specified this defaults to the yaml document encoding (usually utf-8). Supported encoding types are (see the example after this list):
  - **b64, base64**: Base64 encoded content
  - **gz, gzip**: gzip encoded content, for use with the !!binary tag
  - **gz+b64, gz+base64, gzip+b64, gzip+base64**: Base64 encoded gzip content

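
For the gzip-based encodings, the content value is simply the output of gzip (optionally base64-wrapped) for the file body you want written. A hedged sketch of producing such a value on a Linux host:

```sh
# Produce a base64-encoded gzip payload suitable for encoding: "gzip+base64"
echo 'Good news, everyone!' | gzip -c | base64 -w0
```
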
## Custom cloud-config Parameters

### coreos.oem

These fields are borrowed from the [os-release spec][os-release] and repurposed
as a way for cloud-init to know about the OEM partition on this machine.

- **id**: A lower case string identifying the OEM.
- **version-id**: A lower case string identifying the version of the OEM. Example: `168.0.0`
- **name**: A name without the version that is suitable for presentation to the user.
- **home-url**: Link to the homepage of the provider or OEM.
- **bug-report-url**: Link to a place to file bug reports about this OEM partition.

cloudinit must render these fields down to an /etc/oem-release file on disk in the following format:

```
NAME=Rackspace
ID=rackspace
VERSION_ID=168.0.0
PRETTY_NAME="Rackspace Cloud Servers"
HOME_URL="http://www.rackspace.com/cloud/servers/"
BUG_REPORT_URL="https://github.com/coreos/coreos-overlay"
```

[os-release]: http://www.freedesktop.org/software/systemd/man/os-release.html

### coreos.etcd.discovery_url

The value of `coreos.etcd.discovery_url` will be used to discover the instance's etcd peers using the [etcd discovery protocol][disco-proto]. Usage of the [public discovery service][disco-service] is encouraged.

[disco-proto]: https://github.com/coreos/etcd/blob/master/Documentation/discovery-protocol.md
[disco-service]: http://discovery.etcd.io

### coreos.units

Arbitrary systemd units may be provided in the `coreos.units` attribute.
`coreos.units` is a list of objects with the following fields:

- **name**: string representing the unit's name
- **runtime**: boolean indicating whether or not to persist the unit across reboots. This is analogous to the `--runtime` flag to `systemctl enable`.
- **content**: plaintext string representing the entire unit file

See the docker example below.

## user-data Script

Simply set your user-data to a script where the first line is a shebang:

```
#!/bin/bash

echo 'Hello, world!'
```

## Examples

### Inject an SSH key, bootstrap etcd, and start fleet

```
#cloud-config

coreos:
  etcd:
    discovery_url: https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877
  fleet:
    autostart: yes
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```

### Start a docker container on boot

```
#cloud-config

coreos:
  units:
    - name: docker-redis.service
      content: |
        [Unit]
        Description=Redis container
        Author=Me
        After=docker.service

        [Service]
        Restart=always
        ExecStart=/usr/bin/docker start -a redis_server
        ExecStop=/usr/bin/docker stop -t 2 redis_server

        [Install]
        WantedBy=local.target
```

### Add a user

```
#cloud-config

users:
  - name: elroy
    passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm...
    groups:
      - staff
      - docker
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```

### Inject configuration files

```
#cloud-config

write_files:
  - path: "/etc/resolv.conf"
    permissions: "0644"
    owner: "root"
    content: |
      nameserver 8.8.8.8
  - path: "/etc/motd"
    permissions: "0644"
    owner: "root"
    content: |
      Good news, everyone!
  - path: "/tmp/like_this"
    permissions: "0644"
    owner: "root"
    encoding: "gzip"
    content: !!binary |
      H4sIAKgdh1QAAwtITM5WyK1USMqvUCjPLMlQSMssS1VIya9KzVPIySwszS9SyCpNLwYARQFQ5CcAAAA=
  - path: "/tmp/or_like_this"
    permissions: "0644"
    owner: "root"
    encoding: "gzip+base64"
    content: |
      H4sIAKgdh1QAAwtITM5WyK1USMqvUCjPLMlQSMssS1VIya9KzVPIySwszS9SyCpNLwYARQFQ5CcAAAA=
  - path: "/tmp/todolist"
    permissions: "0644"
    owner: "root"
    encoding: "base64"
    content: |
      UGFjayBteSBib3ggd2l0aCBmaXZlIGRvemVuIGxpcXVvciBqdWdz
```

### manage_etc_hosts

The `manage_etc_hosts` parameter configures the contents of the `/etc/hosts` file, which is used for local name resolution.
Currently, the only supported value is "localhost", which will cause your system's hostname
to resolve to "127.0.0.1". This is helpful when the host does not have DNS
infrastructure in place to resolve its own hostname, for example, when using Vagrant.

```yaml
#cloud-config

manage_etc_hosts: "localhost"
- path: /etc/hosts
  contents: |
    127.0.0.1 localhost
    192.0.2.211 buildbox
- path: /etc/resolv.conf
  contents: |
    nameserver 192.0.2.13
    nameserver 192.0.2.14
```

@@ -1,40 +0,0 @@
# Distribution via Config Drive

CoreOS supports providing configuration data via [config drive][config-drive]
disk images. Currently only providing a single script or cloud config file is
supported.

[config-drive]: http://docs.openstack.org/user-guide/cli_config_drive.html

## Contents and Format

The image should be a single FAT or ISO9660 file system with the label
`config-2` and the configuration data should be located at
`openstack/latest/user_data`.

For example, to wrap up a config named `user_data` in a config drive image:

```sh
mkdir -p /tmp/new-drive/openstack/latest
cp user_data /tmp/new-drive/openstack/latest/user_data
mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive
rm -r /tmp/new-drive
```

If on OS X, replace the `mkisofs` invocation with:

```sh
hdiutil makehybrid -iso -joliet -default-volume-name config-2 -o configdrive.iso /tmp/new-drive
```

## QEMU virtfs

One exception to the above: when using QEMU it is possible to skip creating an
image and use a plain directory containing the same contents:

```sh
qemu-system-x86_64 \
    -fsdev local,id=conf,security_model=none,readonly,path=/tmp/new-drive \
    -device virtio-9p-pci,fsdev=conf,mount_tag=config-2 \
    [usual qemu options here...]
```

@@ -1,27 +0,0 @@
# Debian Interfaces

**WARNING**: This option is EXPERIMENTAL and may change or be removed at any
point.

There is basic support for converting from a Debian network configuration to
networkd unit files. The -convert-netconf=debian option is used to activate
this feature.

## convert-netconf

Default: ""

Read the network config provided in cloud-drive and translate it from the
specified format into networkd unit files (requires the -from-configdrive
flag). Currently only supports "debian", which provides support for a small
subset of the [Debian network configuration](https://wiki.debian.org/NetworkConfiguration). These options include:

- interface config methods
  - static
    - address/netmask
    - gateway
    - hwaddress
    - dns-nameservers
  - dhcp
    - hwaddress
  - manual
  - loopback
- vlan_raw_device
- bond-slaves

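
Putting the two flags together, a conversion run looks roughly like the sketch below (the config-drive path is a placeholder; `-convert-netconf` and `-from-configdrive` are the flags described above):

```sh
# Translate the Debian-style network config found on the config drive into networkd units
coreos-cloudinit -convert-netconf=debian -from-configdrive=/media/configdrive
```
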
@@ -1,35 +0,0 @@
# VMWare Guestinfo Interface

## Cloud-Config VMWare Guestinfo Variables

coreos-cloudinit accepts configuration from the VMware RPC API's *guestinfo*
facility. This datasource can be enabled with the `--from-vmware-guestinfo`
flag to coreos-cloudinit.

The following guestinfo variables are recognized and processed by cloudinit
when passed from the hypervisor to the virtual machine at boot time. Note that
property names are prefixed with `guestinfo.` in the VMX, e.g., `guestinfo.hostname`.

| guestinfo variable                     | type                            |
|:---------------------------------------|:--------------------------------|
| `hostname`                             | `hostname`                      |
| `interface.<n>.name`                   | `string`                        |
| `interface.<n>.mac`                    | `MAC address`                   |
| `interface.<n>.dhcp`                   | `{"yes", "no"}`                 |
| `interface.<n>.role`                   | `{"public", "private"}`         |
| `interface.<n>.ip.<m>.address`         | `CIDR IP address`               |
| `interface.<n>.route.<l>.gateway`      | `IP address`                    |
| `interface.<n>.route.<l>.destination`  | `CIDR IP address`               |
| `dns.server.<x>`                       | `IP address`                    |
| `coreos.config.data`                   | `string`                        |
| `coreos.config.data.encoding`          | `{"", "base64", "gzip+base64"}` |
| `coreos.config.url`                    | `URL`                           |

Note: "n", "m", "l", and "x" are 0-indexed, incrementing integers. The
identifier for an `interface` does not correspond to anything outside of this
configuration; it serves only to distinguish between multiple `interface`s.

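
From inside a booted VM you can read these properties back with the bundled VMware tools binary, using the same commands this repository's Cloud-Config Locations table lists as data sources:

```sh
# Print the cloud-config data and URL passed via guestinfo, if any
/usr/share/oem/bin/vmtoolsd --cmd "info-get guestinfo.coreos.config.data"
/usr/share/oem/bin/vmtoolsd --cmd "info-get guestinfo.coreos.config.url"
```
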
The guide to [booting on VMWare][bootvmware] is the starting point for more
information about configuring and running CoreOS on VMWare.

[bootvmware]: https://github.com/coreos/docs/blob/master/os/booting-on-vmware.md

LICENSE: 202 changed lines

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright {yyyy} {name of copyright owner}

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1,3 +0,0 @@
Alex Crawford <alex.crawford@coreos.com> (@crawford)
Jonathan Boulle <jonathan.boulle@coreos.com> (@jonboulle)
Brian Waldon <brian.waldon@coreos.com> (@bcwaldon)

NOTICE: 5 changed lines

@@ -1,5 +0,0 @@
CoreOS Project
Copyright 2014 CoreOS, Inc

This product includes software developed at CoreOS, Inc.
(http://www.coreos.com/).

87
README.md
87
README.md
@ -1,86 +1,9 @@
|
||||
# coreos-cloudinit [![Build Status](https://travis-ci.org/coreos/coreos-cloudinit.png?branch=master)](https://travis-ci.org/coreos/coreos-cloudinit)
|
||||
# coreos-cloudinit
|
||||
|
||||
coreos-cloudinit enables a user to customize CoreOS machines by providing either a cloud-config document or an executable script through user-data.
|
||||
coreos-cloudinit allows a user to customize CoreOS machines by providing either an executable script or a cloud-config document as instance user-data. See below to learn how to use these features.
|
||||
|
||||
## Configuration with cloud-config
|
||||
## Supported Cloud-Config Features
|
||||
|
||||
A subset of the [official cloud-config spec][official-cloud-config] is implemented by coreos-cloudinit.
|
||||
Additionally, several [CoreOS-specific options][custom-cloud-config] have been implemented to support interacting with unit files, bootstrapping etcd clusters, and more.
|
||||
All supported cloud-config parameters are [documented here][all-cloud-config].
|
||||
|
||||
[official-cloud-config]: http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data
|
||||
[custom-cloud-config]: https://github.com/coreos/coreos-cloudinit/blob/master/Documentation/cloud-config.md#coreos-parameters
|
||||
[all-cloud-config]: https://github.com/coreos/coreos-cloudinit/tree/master/Documentation/cloud-config.md
|
||||
|
||||
The following is an example cloud-config document:
|
||||
|
||||
```
#cloud-config

coreos:
  units:
    - name: etcd.service
      command: start

users:
  - name: core
    passwd: $1$allJZawX$00S5T756I5PGdQga5qhqv1

write_files:
  - path: /etc/resolv.conf
    content: |
      nameserver 192.0.2.2
      nameserver 192.0.2.3
```
|
||||
|
||||
## Executing a Script
|
||||
|
||||
coreos-cloudinit supports executing user-data as a script instead of parsing it as a cloud-config document.
|
||||
Make sure the first line of your user-data is a shebang and coreos-cloudinit will attempt to execute it:
|
||||
|
||||
```
#!/bin/bash

echo 'Hello, world!'
```
|
||||
|
||||
## user-data Field Substitution
|
||||
|
||||
coreos-cloudinit will replace the following set of tokens in your user-data with system-generated values.
|
||||
|
||||
| Token | Description |
| ------------- | ----------- |
| $public_ipv4 | Public IPv4 address of machine |
| $private_ipv4 | Private IPv4 address of machine |
|
||||
|
||||
These values are determined by CoreOS based on the given provider on which your machine is running.
|
||||
Read more about provider-specific functionality in the [CoreOS OEM documentation][oem-doc].
|
||||
|
||||
[oem-doc]: https://coreos.com/docs/sdk-distributors/distributors/notes-for-distributors/
|
||||
|
||||
For example, submitting the following user-data...
|
||||
|
||||
```
#cloud-config
coreos:
  etcd:
    addr: $public_ipv4:4001
    peer-addr: $private_ipv4:7001
```
|
||||
|
||||
...will result in this cloud-config document being executed:
|
||||
|
||||
```
#cloud-config
coreos:
  etcd:
    addr: 203.0.113.29:4001
    peer-addr: 192.0.2.13:7001
```
|
||||
|
||||
## Bugs
|
||||
|
||||
Please use the [CoreOS issue tracker][bugs] to report all bugs, issues, and feature requests.
|
||||
|
||||
[bugs]: https://github.com/coreos/bugs/issues/new?labels=component/cloud-init
|
||||
Only a subset of [cloud-config functionality][cloud-config] is implemented. A set of custom parameters were added to the cloud-config format that are specific to CoreOS, which are [documented here](https://github.com/coreos/coreos-cloudinit/tree/master/Documentation/cloud-config.md).
|
||||
|
||||
[cloud-config]: http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data
|
||||
|
37 build
@ -1,37 +1,6 @@
|
||||
#!/bin/bash -x
|
||||
#!/bin/bash -e
|
||||
|
||||
ORG_PATH="github.com/coreos"
|
||||
REPO_PATH="${ORG_PATH}/coreos-cloudinit"
|
||||
VERSION=$(git describe --tags)
|
||||
GLDFLAGS="-X main.version=${VERSION}"
|
||||
|
||||
rm -rf bin tmp
|
||||
|
||||
export GO15VENDOREXPERIMENT=1
|
||||
export GOBIN=${PWD}/bin
|
||||
export GOPATH=${PWD}/gopath
|
||||
mkdir -p $GOBIN
|
||||
mkdir -p $GOPATH
|
||||
mkdir -p bin tmp
|
||||
export GOPATH=${PWD}
|
||||
|
||||
which go 2>/dev/null
|
||||
|
||||
if [ "x$?" != "x0" ]; then
|
||||
export GOROOT=$(pwd)/goroot
|
||||
export PATH=$GOROOT/bin:$PATH
|
||||
mkdir -p $GOROOT
|
||||
wget https://storage.googleapis.com/golang/go1.5.linux-amd64.tar.gz -O tmp/go.tar.gz
|
||||
tar --strip-components=1 -C $GOROOT -xf tmp/go.tar.gz
|
||||
fi
|
||||
|
||||
if [ ! -h $GOPATH/src/${REPO_PATH} ]; then
|
||||
mkdir -p $GOPATH/src/${ORG_PATH}
|
||||
ln -s ../../../.. $GOPATH/src/${REPO_PATH} || echo "exit 255"
|
||||
fi
|
||||
|
||||
set -e
|
||||
|
||||
for os in linux freebsd netbsd openbsd windows; do
|
||||
GOOS=${os} go build -x -ldflags "${GLDFLAGS}" -tags netgo -o bin/cloudinit-${os}-x86_64 ${REPO_PATH}
|
||||
GOOS=${os} GOARCH=386 go build -x -ldflags "${GLDFLAGS}" -tags netgo -o bin/cloudinit-${os}-x86_32 ${REPO_PATH}
|
||||
done
|
||||
go build -o bin/coreos-cloudinit github.com/coreos/coreos-cloudinit
|
||||
|
143 cloudinit/cloud_config.go Normal file
@ -0,0 +1,143 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/third_party/launchpad.net/goyaml"
|
||||
)
|
||||
|
||||
const DefaultSSHKeyName = "coreos-cloudinit"
|
||||
|
||||
type CloudConfig struct {
|
||||
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
|
||||
Coreos struct {
|
||||
Etcd struct{ Discovery_URL string }
|
||||
Fleet struct{ Autostart bool }
|
||||
Units []Unit
|
||||
}
|
||||
WriteFiles []WriteFile `yaml:"write_files"`
|
||||
Hostname string
|
||||
Users []User
|
||||
}
|
||||
|
||||
func NewCloudConfig(contents []byte) (*CloudConfig, error) {
|
||||
var cfg CloudConfig
|
||||
err := goyaml.Unmarshal(contents, &cfg)
|
||||
return &cfg, err
|
||||
}
|
||||
|
||||
func (cc CloudConfig) String() string {
|
||||
bytes, err := goyaml.Marshal(cc)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
stringified := string(bytes)
|
||||
stringified = fmt.Sprintf("#cloud-config\n%s", stringified)
|
||||
|
||||
return stringified
|
||||
}
|
||||
|
||||
func ApplyCloudConfig(cfg CloudConfig, sshKeyName string) error {
|
||||
if cfg.Hostname != "" {
|
||||
if err := SetHostname(cfg.Hostname); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Set hostname to %s", cfg.Hostname)
|
||||
}
|
||||
|
||||
if len(cfg.Users) > 0 {
|
||||
for _, user := range cfg.Users {
|
||||
if user.Name == "" {
|
||||
log.Printf("User object has no 'name' field, skipping")
|
||||
continue
|
||||
}
|
||||
|
||||
if UserExists(&user) {
|
||||
log.Printf("User '%s' exists, ignoring creation-time fields", user.Name)
|
||||
if user.PasswordHash != "" {
|
||||
log.Printf("Setting '%s' user's password", user.Name)
|
||||
if err := SetUserPassword(user.Name, user.PasswordHash); err != nil {
|
||||
log.Printf("Failed setting '%s' user's password: %v", user.Name, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
} else {
|
||||
log.Printf("Creating user '%s'", user.Name)
|
||||
if err := CreateUser(&user); err != nil {
|
||||
log.Printf("Failed creating user '%s': %v", user.Name, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(user.SSHAuthorizedKeys) > 0 {
|
||||
log.Printf("Authorizing %d SSH keys for user '%s'", len(user.SSHAuthorizedKeys), user.Name)
|
||||
if err := AuthorizeSSHKeys(user.Name, sshKeyName, user.SSHAuthorizedKeys); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.SSHAuthorizedKeys) > 0 {
|
||||
err := AuthorizeSSHKeys("core", sshKeyName, cfg.SSHAuthorizedKeys)
|
||||
if err == nil {
|
||||
log.Printf("Authorized SSH keys for core user")
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) > 0 {
|
||||
for _, file := range cfg.WriteFiles {
|
||||
if err := ProcessWriteFile("/", &file); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Wrote file %s to filesystem", file.Path)
|
||||
}
|
||||
}
|
||||
|
||||
if cfg.Coreos.Etcd.Discovery_URL != "" {
|
||||
err := PersistEtcdDiscoveryURL(cfg.Coreos.Etcd.Discovery_URL)
|
||||
if err == nil {
|
||||
log.Printf("Consumed etcd discovery url")
|
||||
} else {
|
||||
log.Fatalf("Failed to persist etcd discovery url to filesystem: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.Coreos.Units) > 0 {
|
||||
for _, unit := range cfg.Coreos.Units {
|
||||
log.Printf("Placing unit %s on filesystem", unit.Name)
|
||||
dst, err := PlaceUnit("/", &unit)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Placed unit %s at %s", unit.Name, dst)
|
||||
|
||||
if unit.Group() != "network" {
|
||||
log.Printf("Enabling unit file %s", dst)
|
||||
if err := EnableUnitFile(dst, unit.Runtime); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Enabled unit %s", unit.Name)
|
||||
} else {
|
||||
log.Printf("Skipping enable for network-like unit %s", unit.Name)
|
||||
}
|
||||
}
|
||||
DaemonReload()
|
||||
StartUnits(cfg.Coreos.Units)
|
||||
}
|
||||
|
||||
if cfg.Coreos.Fleet.Autostart {
|
||||
err := StartUnitByName("fleet.service")
|
||||
if err == nil {
|
||||
log.Printf("Started fleet service.")
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
252 cloudinit/cloud_config_test.go Normal file
@ -0,0 +1,252 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfigEmpty(t *testing.T) {
|
||||
cfg, err := NewCloudConfig([]byte{})
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
|
||||
if cfg.Coreos.Etcd.Discovery_URL != "" {
|
||||
t.Error("Parsed incorrect value of discovery url")
|
||||
}
|
||||
|
||||
if cfg.Coreos.Fleet.Autostart {
|
||||
t.Error("Expected AutostartFleet not to be defined")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 0 {
|
||||
t.Error("Expected zero WriteFiles")
|
||||
}
|
||||
|
||||
if cfg.Hostname != "" {
|
||||
t.Errorf("Expected hostname to be empty, got '%s'", cfg.Hostname)
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfig(t *testing.T) {
|
||||
contents := []byte(`
|
||||
coreos:
|
||||
etcd:
|
||||
discovery_url: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
|
||||
fleet:
|
||||
autostart: Yes
|
||||
units:
|
||||
- name: 50-eth0.network
|
||||
runtime: yes
|
||||
content: '[Match]
|
||||
|
||||
Name=eth47
|
||||
|
||||
|
||||
[Network]
|
||||
|
||||
Address=10.209.171.177/19
|
||||
|
||||
'
|
||||
ssh_authorized_keys:
|
||||
- foobar
|
||||
- foobaz
|
||||
write_files:
|
||||
- content: |
|
||||
penny
|
||||
elroy
|
||||
path: /etc/dogepack.conf
|
||||
permissions: '0644'
|
||||
owner: root:dogepack
|
||||
hostname: trontastic
|
||||
`)
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 2 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
} else if keys[0] != "foobar" {
|
||||
t.Error("Expected first SSH key to be 'foobar'")
|
||||
} else if keys[1] != "foobaz" {
|
||||
t.Error("Expected first SSH key to be 'foobaz'")
|
||||
}
|
||||
|
||||
if cfg.Coreos.Etcd.Discovery_URL != "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877" {
|
||||
t.Error("Failed to parse etcd discovery url")
|
||||
}
|
||||
|
||||
if !cfg.Coreos.Fleet.Autostart {
|
||||
t.Error("Expected AutostartFleet to be true")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 1 {
|
||||
t.Error("Failed to parse correct number of write_files")
|
||||
} else {
|
||||
wf := cfg.WriteFiles[0]
|
||||
if wf.Content != "penny\nelroy\n" {
|
||||
t.Errorf("WriteFile has incorrect contents '%s'", wf.Content)
|
||||
}
|
||||
if wf.Encoding != "" {
|
||||
t.Errorf("WriteFile has incorrect encoding %s", wf.Encoding)
|
||||
}
|
||||
if wf.Permissions != "0644" {
|
||||
t.Errorf("WriteFile has incorrect permissions %s", wf.Permissions)
|
||||
}
|
||||
if wf.Path != "/etc/dogepack.conf" {
|
||||
t.Errorf("WriteFile has incorrect path %s", wf.Path)
|
||||
}
|
||||
if wf.Owner != "root:dogepack" {
|
||||
t.Errorf("WriteFile has incorrect owner %s", wf.Owner)
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.Coreos.Units) != 1 {
|
||||
t.Error("Failed to parse correct number of units")
|
||||
} else {
|
||||
u := cfg.Coreos.Units[0]
|
||||
expect := `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`
|
||||
if u.Content != expect {
|
||||
t.Errorf("Unit has incorrect contents '%s'.\nExpected '%s'.", u.Content, expect)
|
||||
}
|
||||
if u.Runtime != true {
|
||||
t.Errorf("Unit has incorrect runtime value")
|
||||
}
|
||||
if u.Name != "50-eth0.network" {
|
||||
t.Errorf("Unit has incorrect name %s", u.Name)
|
||||
}
|
||||
if u.Type() != "network" {
|
||||
t.Errorf("Unit has incorrect type '%s'", u.Type())
|
||||
}
|
||||
}
|
||||
|
||||
if cfg.Hostname != "trontastic" {
|
||||
t.Errorf("Failed to parse hostname")
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that our interface conversion doesn't panic
|
||||
func TestCloudConfigKeysNotList(t *testing.T) {
|
||||
contents := []byte(`
|
||||
ssh_authorized_keys:
|
||||
- foo: bar
|
||||
`)
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigSerializationHeader(t *testing.T) {
|
||||
cfg, _ := NewCloudConfig([]byte{})
|
||||
contents := cfg.String()
|
||||
header := strings.SplitN(contents, "\n", 2)[0]
|
||||
if header != "#cloud-config" {
|
||||
t.Fatalf("Serialized config did not have expected header")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsers(t *testing.T) {
|
||||
contents := []byte(`
|
||||
users:
|
||||
- name: elroy
|
||||
passwd: somehash
|
||||
ssh-authorized-keys:
|
||||
- somekey
|
||||
gecos: arbitrary comment
|
||||
homedir: /home/place
|
||||
no-create-home: yes
|
||||
primary-group: things
|
||||
groups:
|
||||
- ping
|
||||
- pong
|
||||
no-user-group: true
|
||||
system: y
|
||||
no-log-init: True
|
||||
`)
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", cfg.Users)
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.PasswordHash != "somehash" {
|
||||
t.Errorf("User passwd is %q, expected 'somehash'", user.PasswordHash)
|
||||
}
|
||||
|
||||
if keys := user.SSHAuthorizedKeys; len(keys) != 1 {
|
||||
t.Errorf("Parsed %d ssh keys, expected 1", len(keys))
|
||||
} else {
|
||||
key := user.SSHAuthorizedKeys[0]
|
||||
if key != "somekey" {
|
||||
t.Errorf("User SSH key is %q, expected 'somekey'", key)
|
||||
}
|
||||
}
|
||||
|
||||
if user.GECOS != "arbitrary comment" {
|
||||
t.Errorf("Failed to parse gecos field, got %q", user.GECOS)
|
||||
}
|
||||
|
||||
if user.Homedir != "/home/place" {
|
||||
t.Errorf("Failed to parse homedir field, got %q", user.Homedir)
|
||||
}
|
||||
|
||||
if !user.NoCreateHome {
|
||||
t.Errorf("Failed to parse no-create-home field")
|
||||
}
|
||||
|
||||
if user.PrimaryGroup != "things"{
|
||||
t.Errorf("Failed to parse primary-group field, got %q", user.PrimaryGroup)
|
||||
}
|
||||
|
||||
if len(user.Groups) != 2 {
|
||||
t.Errorf("Failed to parse 2 goups, got %d", len(user.Groups))
|
||||
} else {
|
||||
if user.Groups[0] != "ping" {
|
||||
t.Errorf("First group was %q, not expected value 'ping'", user.Groups[0])
|
||||
}
|
||||
if user.Groups[1] != "pong" {
|
||||
t.Errorf("First group was %q, not expected value 'pong'", user.Groups[1])
|
||||
}
|
||||
}
|
||||
|
||||
if !user.NoUserGroup {
|
||||
t.Errorf("Failed to parse no-user-group field")
|
||||
}
|
||||
|
||||
if !user.System {
|
||||
t.Errorf("Failed to parse system field")
|
||||
}
|
||||
|
||||
if !user.NoLogInit {
|
||||
t.Errorf("Failed to parse no-log-init field")
|
||||
}
|
||||
}
|
25 cloudinit/etcd.go Normal file
@ -0,0 +1,25 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"path"
|
||||
)
|
||||
|
||||
const (
|
||||
etcdDiscoveryPath = "/var/run/etcd/bootstrap.disco"
|
||||
)
|
||||
|
||||
func PersistEtcdDiscoveryURL(url string) error {
|
||||
dir := path.Dir(etcdDiscoveryPath)
|
||||
if _, err := os.Stat(dir); err != nil {
|
||||
log.Printf("Creating directory /var/run/etcd")
|
||||
err := os.MkdirAll(dir, os.FileMode(0644))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return ioutil.WriteFile(etcdDiscoveryPath, []byte(url), os.FileMode(0644))
|
||||
}
|
36 cloudinit/metadata_service.go Normal file
@ -0,0 +1,36 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
)
|
||||
|
||||
type metadataService struct {
|
||||
url string
|
||||
client http.Client
|
||||
}
|
||||
|
||||
func NewMetadataService(url string) *metadataService {
|
||||
return &metadataService{url, http.Client{}}
|
||||
}
|
||||
|
||||
func (ms *metadataService) UserData() ([]byte, error) {
|
||||
resp, err := ms.client.Get(ms.url)
|
||||
if err != nil {
|
||||
return []byte{}, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode / 100 != 2 {
|
||||
return []byte{}, nil
|
||||
}
|
||||
|
||||
respBytes, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return respBytes, nil
|
||||
}
|
||||
|
||||
|
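The metadata service above is a thin wrapper around an HTTP GET: UserData fetches the configured URL and returns the body, or an empty slice on a non-2xx status. A minimal usage sketch, assuming the package is imported as github.com/coreos/coreos-cloudinit/cloudinit and that an EC2-style metadata endpoint is reachable (both are assumptions, not part of this change):

```
package main

import (
	"fmt"
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit"
)

func main() {
	// Hypothetical endpoint; point this at whatever user-data URL the provider exposes.
	svc := cloudinit.NewMetadataService("http://169.254.169.254/2009-04-04/user-data")

	data, err := svc.UserData()
	if err != nil {
		log.Fatalf("fetching user-data failed: %v", err)
	}
	fmt.Printf("fetched %d bytes of user-data\n", len(data))
}
```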
59 cloudinit/ssh_key.go Normal file
@ -0,0 +1,59 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os/exec"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Add the provided SSH public keys to the given user's list of
|
||||
// authorized keys
|
||||
func AuthorizeSSHKeys(user string, keysName string, keys []string) error {
|
||||
for i, key := range keys {
|
||||
keys[i] = strings.TrimSpace(key)
|
||||
}
|
||||
|
||||
// join all keys with newlines, ensuring the resulting string
|
||||
// also ends with a newline
|
||||
joined := fmt.Sprintf("%s\n", strings.Join(keys, "\n"))
|
||||
|
||||
cmd := exec.Command("update-ssh-keys", "-u", user, "-a", keysName)
|
||||
stdin, err := cmd.StdinPipe()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stdout, err := cmd.StdoutPipe()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stderr, err := cmd.StderrPipe()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = cmd.Start()
|
||||
if err != nil {
|
||||
stdin.Close()
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = io.WriteString(stdin, joined)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stdin.Close()
|
||||
stdoutBytes, _ := ioutil.ReadAll(stdout)
|
||||
stderrBytes, _ := ioutil.ReadAll(stderr)
|
||||
|
||||
err = cmd.Wait()
|
||||
if err != nil {
|
||||
return fmt.Errorf("Call to update-ssh-keys failed with %v: %s %s", err, string(stdoutBytes), string(stderrBytes))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
157 cloudinit/systemd.go Normal file
@ -0,0 +1,157 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/third_party/github.com/coreos/go-systemd/dbus"
|
||||
)
|
||||
|
||||
type Unit struct {
|
||||
Name string
|
||||
Runtime bool
|
||||
Content string
|
||||
}
|
||||
|
||||
func (u *Unit) Type() string {
|
||||
ext := filepath.Ext(u.Name)
|
||||
return strings.TrimLeft(ext, ".")
|
||||
}
|
||||
|
||||
func (u *Unit) Group() (group string) {
|
||||
t := u.Type()
|
||||
if t == "network" || t == "netdev" || t == "link" {
|
||||
group = "network"
|
||||
} else {
|
||||
group = "system"
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
type Script []byte
|
||||
|
||||
func PlaceUnit(root string, u *Unit) (string, error) {
|
||||
dir := "etc"
|
||||
if u.Runtime {
|
||||
dir = "run"
|
||||
}
|
||||
|
||||
dst := path.Join(root, dir, "systemd", u.Group())
|
||||
if _, err := os.Stat(dst); os.IsNotExist(err) {
|
||||
if err := os.MkdirAll(dst, os.FileMode(0755)); err != nil {
|
||||
return "", err
|
||||
}
|
||||
}
|
||||
|
||||
dst = path.Join(dst, u.Name)
|
||||
err := ioutil.WriteFile(dst, []byte(u.Content), os.FileMode(0644))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return dst, nil
|
||||
}
|
||||
|
||||
func EnableUnitFile(file string, runtime bool) error {
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
files := []string{file}
|
||||
_, _, err = conn.EnableUnitFiles(files, runtime, true)
|
||||
return err
|
||||
}
|
||||
|
||||
func separateNetworkUnits(units []Unit) ([]Unit, []Unit) {
|
||||
networkUnits := make([]Unit, 0)
|
||||
nonNetworkUnits := make([]Unit, 0)
|
||||
for _, unit := range units {
|
||||
if unit.Group() == "network" {
|
||||
networkUnits = append(networkUnits, unit)
|
||||
} else {
|
||||
nonNetworkUnits = append(nonNetworkUnits, unit)
|
||||
}
|
||||
}
|
||||
return networkUnits, nonNetworkUnits
|
||||
}
|
||||
|
||||
func StartUnits(units []Unit) error {
|
||||
networkUnits, nonNetworkUnits := separateNetworkUnits(units)
|
||||
if len(networkUnits) > 0 {
|
||||
if err := RestartUnitByName("systemd-networkd.service"); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
for _, unit := range nonNetworkUnits {
|
||||
if err := RestartUnitByName(unit.Name); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func DaemonReload() error {
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = conn.Reload()
|
||||
return err
|
||||
}
|
||||
|
||||
func RestartUnitByName(name string) error {
|
||||
log.Printf("Restarting unit %s", name)
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
output, err := conn.RestartUnit(name, "replace")
|
||||
log.Printf("Restart completed with '%s'", output)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func StartUnitByName(name string) error {
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = conn.StartUnit(name, "replace")
|
||||
return err
|
||||
}
|
||||
|
||||
func ExecuteScript(scriptPath string) (string, error) {
|
||||
props := []dbus.Property{
|
||||
dbus.PropDescription("Unit generated and executed by coreos-cloudinit on behalf of user"),
|
||||
dbus.PropExecStart([]string{"/bin/bash", scriptPath}, false),
|
||||
}
|
||||
|
||||
base := path.Base(scriptPath)
|
||||
name := fmt.Sprintf("coreos-cloudinit-%s.service", base)
|
||||
|
||||
log.Printf("Creating transient systemd unit '%s'", name)
|
||||
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
_, err = conn.StartTransientUnit(name, "replace", props...)
|
||||
return name, err
|
||||
}
|
||||
|
||||
func SetHostname(hostname string) error {
|
||||
return exec.Command("hostnamectl", "set-hostname", hostname).Run()
|
||||
}
|
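Unit.Type and Unit.Group above decide where PlaceUnit writes a unit and whether it is enabled: .network, .netdev, and .link units fall into the "network" group and land under .../systemd/network, everything else under .../systemd/system. A small sketch of that classification (import path assumed, not part of this change):

```
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/cloudinit"
)

func main() {
	// A runtime network unit: Group() reports "network", so PlaceUnit would
	// write it to <root>/run/systemd/network/50-eth0.network.
	u := cloudinit.Unit{Name: "50-eth0.network", Runtime: true}
	fmt.Println(u.Type())  // network
	fmt.Println(u.Group()) // network

	// A mount unit belongs to the default "system" group instead.
	m := cloudinit.Unit{Name: "media-state.mount"}
	fmt.Println(m.Group()) // system
}
```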
102 cloudinit/systemd_test.go Normal file
@ -0,0 +1,102 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"syscall"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestPlaceNetworkUnit(t *testing.T) {
|
||||
u := Unit{
|
||||
Name: "50-eth0.network",
|
||||
Runtime: true,
|
||||
Content: `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`,
|
||||
}
|
||||
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if _, err := PlaceUnit(dir, &u); err != nil {
|
||||
t.Fatalf("PlaceUnit failed: %v", err)
|
||||
}
|
||||
|
||||
fullPath := path.Join(dir, "run", "systemd", "network", "50-eth0.network")
|
||||
fi, err := os.Stat(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to stat file: %v", err)
|
||||
}
|
||||
|
||||
if fi.Mode() != os.FileMode(0644) {
|
||||
t.Errorf("File has incorrect mode: %v", fi.Mode())
|
||||
}
|
||||
|
||||
contents, err := ioutil.ReadFile(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to read expected file: %v", err)
|
||||
}
|
||||
|
||||
expect := `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`
|
||||
if string(contents) != expect {
|
||||
t.Fatalf("File has incorrect contents '%s'.\nExpected '%s'", string(contents), expect)
|
||||
}
|
||||
}
|
||||
|
||||
func TestPlaceMountUnit(t *testing.T) {
|
||||
u := Unit{
|
||||
Name: "media-state.mount",
|
||||
Runtime: false,
|
||||
Content: `[Mount]
|
||||
What=/dev/sdb1
|
||||
Where=/media/state
|
||||
`,
|
||||
}
|
||||
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if _, err := PlaceUnit(dir, &u); err != nil {
|
||||
t.Fatalf("PlaceUnit failed: %v", err)
|
||||
}
|
||||
|
||||
fullPath := path.Join(dir, "etc", "systemd", "system", "media-state.mount")
|
||||
fi, err := os.Stat(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to stat file: %v", err)
|
||||
}
|
||||
|
||||
if fi.Mode() != os.FileMode(0644) {
|
||||
t.Errorf("File has incorrect mode: %v", fi.Mode())
|
||||
}
|
||||
|
||||
contents, err := ioutil.ReadFile(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to read expected file: %v", err)
|
||||
}
|
||||
|
||||
expect := `[Mount]
|
||||
What=/dev/sdb1
|
||||
Where=/media/state
|
||||
`
|
||||
if string(contents) != expect {
|
||||
t.Fatalf("File has incorrect contents '%s'.\nExpected '%s'", string(contents), expect)
|
||||
}
|
||||
}
|
||||
|
106 cloudinit/user.go Normal file
@ -0,0 +1,106 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"os/exec"
|
||||
"os/user"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type User struct {
|
||||
Name string `yaml:"name"`
|
||||
PasswordHash string `yaml:"passwd"`
|
||||
SSHAuthorizedKeys []string `yaml:"ssh-authorized-keys"`
|
||||
GECOS string `yaml:"gecos"`
|
||||
Homedir string `yaml:"homedir"`
|
||||
NoCreateHome bool `yaml:"no-create-home"`
|
||||
PrimaryGroup string `yaml:"primary-group"`
|
||||
Groups []string `yaml:"groups"`
|
||||
NoUserGroup bool `yaml:"no-user-group"`
|
||||
System bool `yaml:"system"`
|
||||
NoLogInit bool `yaml:"no-log-init"`
|
||||
}
|
||||
|
||||
func UserExists(u *User) bool {
|
||||
_, err := user.Lookup(u.Name)
|
||||
return err == nil
|
||||
}
|
||||
|
||||
func CreateUser(u *User) error {
|
||||
args := []string{}
|
||||
|
||||
if u.PasswordHash != "" {
|
||||
args = append(args, "--password", u.PasswordHash)
|
||||
}
|
||||
|
||||
if u.GECOS != "" {
|
||||
args = append(args, "--comment", fmt.Sprintf("%q", u.GECOS))
|
||||
}
|
||||
|
||||
if u.Homedir != "" {
|
||||
args = append(args, "--home-dir", u.Homedir)
|
||||
}
|
||||
|
||||
if u.NoCreateHome {
|
||||
args = append(args, "--no-create-home")
|
||||
} else {
|
||||
args = append(args, "--create-home")
|
||||
}
|
||||
|
||||
if u.PrimaryGroup != "" {
|
||||
args = append(args, "--primary-group", u.PrimaryGroup)
|
||||
}
|
||||
|
||||
if len(u.Groups) > 0 {
|
||||
args = append(args, "--groups", strings.Join(u.Groups, ","))
|
||||
}
|
||||
|
||||
if u.NoUserGroup {
|
||||
args = append(args, "--no-user-group")
|
||||
}
|
||||
|
||||
if u.System {
|
||||
args = append(args, "--system")
|
||||
}
|
||||
|
||||
if u.NoLogInit {
|
||||
args = append(args, "--no-log-init")
|
||||
}
|
||||
|
||||
args = append(args, u.Name)
|
||||
|
||||
output, err := exec.Command("useradd", args...).CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("Command 'useradd %s' failed: %v\n%s", strings.Join(args, " "), err, output)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func SetUserPassword(user, hash string) error {
|
||||
cmd := exec.Command("/usr/sbin/chpasswd", "-e")
|
||||
|
||||
stdin, err := cmd.StdinPipe()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = cmd.Start()
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
arg := fmt.Sprintf("%s:%s", user, hash)
|
||||
_, err = stdin.Write([]byte(arg))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
stdin.Close()
|
||||
|
||||
err = cmd.Wait()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
30 cloudinit/user_data.go Normal file
@ -0,0 +1,30 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"log"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func ParseUserData(contents []byte) (interface{}, error) {
|
||||
bytereader := bytes.NewReader(contents)
|
||||
bufreader := bufio.NewReader(bytereader)
|
||||
header, _ := bufreader.ReadString('\n')
|
||||
|
||||
if strings.HasPrefix(header, "#!") {
|
||||
log.Printf("Parsing user-data as script")
|
||||
return Script(contents), nil
|
||||
|
||||
} else if header == "#cloud-config\n" {
|
||||
log.Printf("Parsing user-data as cloud-config")
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
return *cfg, nil
|
||||
} else {
|
||||
return nil, fmt.Errorf("Unrecognized user-data header: %s", header)
|
||||
}
|
||||
}
|
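ParseUserData above picks the user-data type from its first line: a shebang yields a Script, a "#cloud-config" header yields a CloudConfig, and anything else is an error. A sketch of handling the returned interface{} (import path assumed, not part of this change):

```
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit"
)

func main() {
	userdata := []byte("#!/bin/bash\necho 'Hello, world!'\n")

	parsed, err := cloudinit.ParseUserData(userdata)
	if err != nil {
		log.Fatalf("unrecognized user-data: %v", err)
	}

	// ParseUserData returns either a Script or a CloudConfig value.
	switch v := parsed.(type) {
	case cloudinit.Script:
		log.Printf("user-data is a %d-byte script", len(v))
	case cloudinit.CloudConfig:
		log.Printf("user-data is a cloud-config with %d unit(s)", len(v.Coreos.Units))
	}
}
```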
66 cloudinit/workspace.go Normal file
@ -0,0 +1,66 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
)
|
||||
|
||||
func PrepWorkspace(workspace string) error {
|
||||
// Ensure workspace exists and is a directory
|
||||
info, err := os.Stat(workspace)
|
||||
if err == nil {
|
||||
if !info.IsDir() {
|
||||
return fmt.Errorf("%s is not a directory", workspace)
|
||||
}
|
||||
} else {
|
||||
err = os.MkdirAll(workspace, 0755)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Ensure scripts dir in workspace exists and is a directory
|
||||
scripts := path.Join(workspace, "scripts")
|
||||
info, err = os.Stat(scripts)
|
||||
if err == nil {
|
||||
if !info.IsDir() {
|
||||
return fmt.Errorf("%s is not a directory", scripts)
|
||||
}
|
||||
} else {
|
||||
err = os.Mkdir(scripts, 0755)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func PersistScriptInWorkspace(script Script, workspace string) (string, error) {
|
||||
scriptsDir := path.Join(workspace, "scripts")
|
||||
f, err := ioutil.TempFile(scriptsDir, "")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
f.Chmod(0744)
|
||||
|
||||
_, err = f.Write(script)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
// Ensure script has been written to disk before returning, as the
|
||||
// next natural thing to do is execute it
|
||||
f.Sync()
|
||||
|
||||
return f.Name(), nil
|
||||
}
|
||||
|
||||
func PersistScriptUnitNameInWorkspace(name string, workspace string) error {
|
||||
unitPath := path.Join(workspace, "scripts", "unit-name")
|
||||
return ioutil.WriteFile(unitPath, []byte(name), 0644)
|
||||
}
|
46 cloudinit/write_file.go Normal file
@ -0,0 +1,46 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
type WriteFile struct {
|
||||
Encoding string
|
||||
Content string
|
||||
Owner string
|
||||
Path string
|
||||
Permissions string
|
||||
}
|
||||
|
||||
func ProcessWriteFile(base string, wf *WriteFile) error {
|
||||
fullPath := path.Join(base, wf.Path)
|
||||
|
||||
if err := os.MkdirAll(path.Dir(fullPath), os.FileMode(0744)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Parse string representation of file mode as octal
|
||||
perm, err := strconv.ParseInt(wf.Permissions, 8, 32)
|
||||
if err != nil {
|
||||
return errors.New("Unable to parse file permissions as octal integer")
|
||||
}
|
||||
|
||||
if err := ioutil.WriteFile(fullPath, []byte(wf.Content), os.FileMode(perm)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if wf.Owner != "" {
|
||||
// We shell out since we don't have a way to look up unix groups natively
|
||||
cmd := exec.Command("chown", wf.Owner, fullPath)
|
||||
if err := cmd.Run(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
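ProcessWriteFile above parses the permissions field with strconv.ParseInt(..., 8, 32), so the string is always read as octal: "0644" and "644" both map to mode -rw-r--r--, while a non-numeric string such as "pants" is rejected. A small standalone sketch of that conversion:

```
package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	// Same conversion ProcessWriteFile performs: the string is parsed base-8.
	perm, err := strconv.ParseInt("0644", 8, 32)
	if err != nil {
		panic(err)
	}
	fmt.Println(os.FileMode(perm)) // -rw-r--r--

	// Invalid permission strings fail here, which is what
	// TestWriteFileInvalidPermission below relies on.
	if _, err := strconv.ParseInt("pants", 8, 32); err != nil {
		fmt.Println("rejected:", err)
	}
}
```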
81 cloudinit/write_file_test.go Normal file
@ -0,0 +1,81 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"syscall"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestWriteFileUnencodedContent(t *testing.T) {
|
||||
wf := WriteFile{
|
||||
Path: "/tmp/foo",
|
||||
Content: "bar",
|
||||
Permissions: "0644",
|
||||
}
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if err := ProcessWriteFile(dir, &wf); err != nil {
|
||||
t.Fatalf("Processing of WriteFile failed: %v", err)
|
||||
}
|
||||
|
||||
fullPath := path.Join(dir, "tmp", "foo")
|
||||
|
||||
fi, err := os.Stat(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to stat file: %v", err)
|
||||
}
|
||||
|
||||
if fi.Mode() != os.FileMode(0644) {
|
||||
t.Errorf("File has incorrect mode: %v", fi.Mode())
|
||||
}
|
||||
|
||||
contents, err := ioutil.ReadFile(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to read expected file: %v", err)
|
||||
}
|
||||
|
||||
if string(contents) != "bar" {
|
||||
t.Fatalf("File has incorrect contents")
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteFileInvalidPermission(t *testing.T) {
|
||||
wf := WriteFile{
|
||||
Path: "/tmp/foo",
|
||||
Content: "bar",
|
||||
Permissions: "pants",
|
||||
}
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if err := ProcessWriteFile(dir, &wf); err == nil {
|
||||
t.Fatalf("Expected error to be raised when writing file with invalid permission")
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteFileEncodedContent(t *testing.T) {
|
||||
wf := WriteFile{
|
||||
Path: "/tmp/foo",
|
||||
Content: "",
|
||||
Encoding: "base64",
|
||||
}
|
||||
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if err := ProcessWriteFile(dir, &wf); err == nil {
|
||||
t.Fatalf("Expected error to be raised when writing file with encoding")
|
||||
}
|
||||
}
|
164 config/config.go
@ -1,164 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strings"
|
||||
"unicode"
|
||||
|
||||
yaml "gopkg.in/yaml.v2"
|
||||
)
|
||||
|
||||
// CloudConfig encapsulates the entire cloud-config configuration file and maps
|
||||
// directly to YAML. Fields that cannot be set in the cloud-config (fields
|
||||
// used for internal use) have the YAML tag '-' so that they aren't marshalled.
|
||||
type CloudConfig struct {
|
||||
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
|
||||
SSHFingerprints bool `yaml:"no_ssh_fingerprints"`
|
||||
Debug bool `yaml:"debug"`
|
||||
RunCMD []string `yaml:"runcmd"`
|
||||
NetworkConfigPath string `yaml:"-"`
|
||||
NetworkConfig string `yaml:"-"`
|
||||
Bootstrap string `yaml:"-"`
|
||||
SystemInfo SystemInfo `yaml:"system_info"`
|
||||
DisableRoot bool `yaml:"disable_root"`
|
||||
SSHPasswdAuth bool `yaml:"ssh_pwauth"`
|
||||
ResizeRootfs bool `yaml:"resize_rootfs"`
|
||||
CoreOS CoreOS `yaml:"coreos"`
|
||||
WriteFiles []File `yaml:"write_files"`
|
||||
Hostname string `yaml:"hostname"`
|
||||
Users []User `yaml:"users"`
|
||||
ManageEtcHosts EtcHosts `yaml:"manage_etc_hosts"`
|
||||
}
|
||||
|
||||
type CoreOS struct {
|
||||
Etcd Etcd `yaml:"etcd"`
|
||||
Etcd2 Etcd2 `yaml:"etcd2"`
|
||||
Flannel Flannel `yaml:"flannel"`
|
||||
Fleet Fleet `yaml:"fleet"`
|
||||
Locksmith Locksmith `yaml:"locksmith"`
|
||||
OEM OEM `yaml:"oem"`
|
||||
Update Update `yaml:"update"`
|
||||
Units []Unit `yaml:"units"`
|
||||
}
|
||||
|
||||
func IsCloudConfig(userdata string) bool {
|
||||
header := strings.SplitN(userdata, "\n", 2)[0]
|
||||
|
||||
// Trim trailing whitespaces
|
||||
header = strings.TrimRightFunc(header, unicode.IsSpace)
|
||||
|
||||
return (header == "#cloud-config")
|
||||
}
|
||||
|
||||
// NewCloudConfig instantiates a new CloudConfig from the given contents (a
|
||||
// string of YAML), returning any error encountered. It will ignore unknown
|
||||
// fields but log encountering them.
|
||||
func NewCloudConfig(contents string) (*CloudConfig, error) {
|
||||
// yaml.UnmarshalMappingKeyTransform = func(nameIn string) (nameOut string) {
|
||||
// return strings.Replace(nameIn, "-", "_", -1)
|
||||
// }
|
||||
var cfg CloudConfig
|
||||
err := yaml.Unmarshal([]byte(contents), &cfg)
|
||||
return &cfg, err
|
||||
}
|
||||
|
||||
func (cc CloudConfig) String() string {
|
||||
bytes, err := yaml.Marshal(cc)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
stringified := string(bytes)
|
||||
stringified = fmt.Sprintf("#cloud-config\n%s", stringified)
|
||||
|
||||
return stringified
|
||||
}
|
||||
|
||||
// IsZero returns whether or not the parameter is the zero value for its type.
|
||||
// If the parameter is a struct, only the exported fields are considered.
|
||||
func IsZero(c interface{}) bool {
|
||||
return isZero(reflect.ValueOf(c))
|
||||
}
|
||||
|
||||
type ErrorValid struct {
|
||||
Value string
|
||||
Valid string
|
||||
Field string
|
||||
}
|
||||
|
||||
func (e ErrorValid) Error() string {
|
||||
return fmt.Sprintf("invalid value %q for option %q (valid options: %q)", e.Value, e.Field, e.Valid)
|
||||
}
|
||||
|
||||
// AssertStructValid checks the fields in the structure and makes sure that
|
||||
// they contain valid values as specified by the 'valid' flag. Empty fields are
|
||||
// implicitly valid.
|
||||
func AssertStructValid(c interface{}) error {
|
||||
ct := reflect.TypeOf(c)
|
||||
cv := reflect.ValueOf(c)
|
||||
for i := 0; i < ct.NumField(); i++ {
|
||||
ft := ct.Field(i)
|
||||
if !isFieldExported(ft) {
|
||||
continue
|
||||
}
|
||||
|
||||
if err := AssertValid(cv.Field(i), ft.Tag.Get("valid")); err != nil {
|
||||
err.Field = ft.Name
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// AssertValid checks to make sure that the given value is in the list of
|
||||
// valid values. Zero values are implicitly valid.
|
||||
func AssertValid(value reflect.Value, valid string) *ErrorValid {
|
||||
if valid == "" || isZero(value) {
|
||||
return nil
|
||||
}
|
||||
|
||||
vs := fmt.Sprintf("%v", value.Interface())
|
||||
if m, _ := regexp.MatchString(valid, vs); m {
|
||||
return nil
|
||||
}
|
||||
|
||||
return &ErrorValid{
|
||||
Value: vs,
|
||||
Valid: valid,
|
||||
}
|
||||
}
|
||||
|
||||
func isZero(v reflect.Value) bool {
|
||||
switch v.Kind() {
|
||||
case reflect.Struct:
|
||||
vt := v.Type()
|
||||
for i := 0; i < v.NumField(); i++ {
|
||||
if isFieldExported(vt.Field(i)) && !isZero(v.Field(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
default:
|
||||
return v.Interface() == reflect.Zero(v.Type()).Interface()
|
||||
}
|
||||
}
|
||||
|
||||
func isFieldExported(f reflect.StructField) bool {
|
||||
return f.PkgPath == ""
|
||||
}
|
@ -1,503 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestNewCloudConfig(t *testing.T) {
|
||||
tests := []struct {
|
||||
contents string
|
||||
|
||||
config CloudConfig
|
||||
}{
|
||||
{},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - path: underscore",
|
||||
config: CloudConfig{WriteFiles: []File{File{Path: "underscore"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite-files:\n - path: hyphen",
|
||||
config: CloudConfig{WriteFiles: []File{File{Path: "hyphen"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\ncoreos:\n update:\n reboot-strategy: off",
|
||||
config: CloudConfig{CoreOS: CoreOS{Update: Update{RebootStrategy: "off"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\ncoreos:\n update:\n reboot-strategy: false",
|
||||
config: CloudConfig{CoreOS: CoreOS{Update: Update{RebootStrategy: "false"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: 0744",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "0744"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: 744",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "744"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: '0744'",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "0744"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: '744'",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "744"}}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
config, err := NewCloudConfig(tt.contents)
|
||||
if err != nil {
|
||||
t.Errorf("bad error (test case #%d): want %v, got %s", i, nil, err)
|
||||
}
|
||||
if !reflect.DeepEqual(&tt.config, config) {
|
||||
t.Errorf("bad config (test case #%d): want %#v, got %#v", i, tt.config, config)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsZero(t *testing.T) {
|
||||
tests := []struct {
|
||||
c interface{}
|
||||
|
||||
empty bool
|
||||
}{
|
||||
{struct{}{}, true},
|
||||
{struct{ a, b string }{}, true},
|
||||
{struct{ A, b string }{}, true},
|
||||
{struct{ A, B string }{}, true},
|
||||
{struct{ A string }{A: "hello"}, false},
|
||||
{struct{ A int }{}, true},
|
||||
{struct{ A int }{A: 1}, false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if empty := IsZero(tt.c); tt.empty != empty {
|
||||
t.Errorf("bad result (%q): want %t, got %t", tt.c, tt.empty, empty)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestAssertStructValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
c interface{}
|
||||
|
||||
err error
|
||||
}{
|
||||
{struct{}{}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{A: "1", b: "2"}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{A: "1", b: "hello"}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{A: "hello", b: "2"}, &ErrorValid{Value: "hello", Field: "A", Valid: "^1|2$"}},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{}, nil},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{A: 1, b: 2}, nil},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{A: 1, b: 9}, nil},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{A: 9, b: 2}, &ErrorValid{Value: "9", Field: "A", Valid: "^1|2$"}},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if err := AssertStructValid(tt.c); !reflect.DeepEqual(tt.err, err) {
|
||||
t.Errorf("bad result (%q): want %q, got %q", tt.c, tt.err, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigCompile(t *testing.T) {
|
||||
tests := []interface{}{
|
||||
Etcd{},
|
||||
File{},
|
||||
Flannel{},
|
||||
Fleet{},
|
||||
Locksmith{},
|
||||
OEM{},
|
||||
Unit{},
|
||||
Update{},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
ttt := reflect.TypeOf(tt)
|
||||
for i := 0; i < ttt.NumField(); i++ {
|
||||
ft := ttt.Field(i)
|
||||
if !isFieldExported(ft) {
|
||||
continue
|
||||
}
|
||||
|
||||
if _, err := regexp.Compile(ft.Tag.Get("valid")); err != nil {
|
||||
t.Errorf("bad regexp(%s.%s): want %v, got %s", ttt.Name(), ft.Name, nil, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUnknownKeys(t *testing.T) {
|
||||
contents := `
|
||||
coreos:
|
||||
etcd:
|
||||
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
|
||||
coreos_unknown:
|
||||
foo: "bar"
|
||||
section_unknown:
|
||||
dunno:
|
||||
something
|
||||
bare_unknown:
|
||||
bar
|
||||
write_files:
|
||||
- content: fun
|
||||
path: /var/party
|
||||
file_unknown: nofun
|
||||
users:
|
||||
- name: fry
|
||||
passwd: somehash
|
||||
user_unknown: philip
|
||||
hostname:
|
||||
foo
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("error instantiating CloudConfig with unknown keys: %v", err)
|
||||
}
|
||||
if cfg.Hostname != "foo" {
|
||||
t.Fatalf("hostname not correctly set when invalid keys are present")
|
||||
}
|
||||
if cfg.CoreOS.Etcd.Discovery != "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877" {
|
||||
t.Fatalf("etcd section not correctly set when invalid keys are present")
|
||||
}
|
||||
if len(cfg.WriteFiles) < 1 || cfg.WriteFiles[0].Content != "fun" || cfg.WriteFiles[0].Path != "/var/party" {
|
||||
t.Fatalf("write_files section not correctly set when invalid keys are present")
|
||||
}
|
||||
if len(cfg.Users) < 1 || cfg.Users[0].Name != "fry" || cfg.Users[0].PasswordHash != "somehash" {
|
||||
t.Fatalf("users section not correctly set when invalid keys are present")
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfigEmpty(t *testing.T) {
|
||||
cfg, err := NewCloudConfig("")
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 0 {
|
||||
t.Error("Expected zero WriteFiles")
|
||||
}
|
||||
|
||||
if cfg.Hostname != "" {
|
||||
t.Errorf("Expected hostname to be empty, got '%s'", cfg.Hostname)
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfig(t *testing.T) {
|
||||
contents := `
|
||||
coreos:
|
||||
etcd:
|
||||
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
|
||||
update:
|
||||
reboot_strategy: reboot
|
||||
units:
|
||||
- name: 50-eth0.network
|
||||
runtime: yes
|
||||
content: '[Match]
|
||||
|
||||
Name=eth47
|
||||
|
||||
|
||||
[Network]
|
||||
|
||||
Address=10.209.171.177/19
|
||||
|
||||
'
|
||||
oem:
|
||||
id: rackspace
|
||||
name: Rackspace Cloud Servers
|
||||
version_id: 168.0.0
|
||||
home_url: https://www.rackspace.com/cloud/servers/
|
||||
bug_report_url: https://github.com/coreos/coreos-overlay
|
||||
ssh_authorized_keys:
|
||||
- foobar
|
||||
- foobaz
|
||||
write_files:
|
||||
- content: |
|
||||
penny
|
||||
elroy
|
||||
path: /etc/dogepack.conf
|
||||
permissions: '0644'
|
||||
owner: root:dogepack
|
||||
hostname: trontastic
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 2 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
} else if keys[0] != "foobar" {
|
||||
t.Error("Expected first SSH key to be 'foobar'")
|
||||
} else if keys[1] != "foobaz" {
|
||||
t.Error("Expected first SSH key to be 'foobaz'")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 1 {
|
||||
t.Error("Failed to parse correct number of write_files")
|
||||
} else {
|
||||
wf := cfg.WriteFiles[0]
|
||||
if wf.Content != "penny\nelroy\n" {
|
||||
t.Errorf("WriteFile has incorrect contents '%s'", wf.Content)
|
||||
}
|
||||
if wf.Encoding != "" {
|
||||
t.Errorf("WriteFile has incorrect encoding %s", wf.Encoding)
|
||||
}
|
||||
if wf.RawFilePermissions != "0644" {
|
||||
t.Errorf("WriteFile has incorrect permissions %s", wf.RawFilePermissions)
|
||||
}
|
||||
if wf.Path != "/etc/dogepack.conf" {
|
||||
t.Errorf("WriteFile has incorrect path %s", wf.Path)
|
||||
}
|
||||
if wf.Owner != "root:dogepack" {
|
||||
t.Errorf("WriteFile has incorrect owner %s", wf.Owner)
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.CoreOS.Units) != 1 {
|
||||
t.Error("Failed to parse correct number of units")
|
||||
} else {
|
||||
u := cfg.CoreOS.Units[0]
|
||||
expect := `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`
|
||||
if u.Content != expect {
|
||||
t.Errorf("Unit has incorrect contents '%s'.\nExpected '%s'.", u.Content, expect)
|
||||
}
|
||||
if u.Runtime != true {
|
||||
t.Errorf("Unit has incorrect runtime value")
|
||||
}
|
||||
if u.Name != "50-eth0.network" {
|
||||
t.Errorf("Unit has incorrect name %s", u.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if cfg.CoreOS.OEM.ID != "rackspace" {
|
||||
t.Errorf("Failed parsing coreos.oem. Expected ID 'rackspace', got %q.", cfg.CoreOS.OEM.ID)
|
||||
}
|
||||
|
||||
if cfg.Hostname != "trontastic" {
|
||||
t.Errorf("Failed to parse hostname")
|
||||
}
|
||||
if cfg.CoreOS.Update.RebootStrategy != "reboot" {
|
||||
t.Errorf("Failed to parse locksmith strategy")
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that our interface conversion doesn't panic
|
||||
func TestCloudConfigKeysNotList(t *testing.T) {
|
||||
contents := `
|
||||
ssh_authorized_keys:
|
||||
- foo: bar
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigSerializationHeader(t *testing.T) {
|
||||
cfg, _ := NewCloudConfig("")
|
||||
contents := cfg.String()
|
||||
header := strings.SplitN(contents, "\n", 2)[0]
|
||||
if header != "#cloud-config" {
|
||||
t.Fatalf("Serialized config did not have expected header")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsers(t *testing.T) {
|
||||
contents := `
|
||||
users:
|
||||
- name: elroy
|
||||
passwd: somehash
|
||||
ssh_authorized_keys:
|
||||
- somekey
|
||||
gecos: arbitrary comment
|
||||
homedir: /home/place
|
||||
no_create_home: yes
|
||||
lock_passwd: false
|
||||
primary_group: things
|
||||
groups:
|
||||
- ping
|
||||
- pong
|
||||
no_user_group: true
|
||||
system: y
|
||||
no_log_init: True
|
||||
shell: /bin/sh
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.PasswordHash != "somehash" {
|
||||
t.Errorf("User passwd is %q, expected 'somehash'", user.PasswordHash)
|
||||
}
|
||||
|
||||
if keys := user.SSHAuthorizedKeys; len(keys) != 1 {
|
||||
t.Errorf("Parsed %d ssh keys, expected 1", len(keys))
|
||||
} else {
|
||||
key := user.SSHAuthorizedKeys[0]
|
||||
if key != "somekey" {
|
||||
t.Errorf("User SSH key is %q, expected 'somekey'", key)
|
||||
}
|
||||
}
|
||||
|
||||
if user.GECOS != "arbitrary comment" {
|
||||
t.Errorf("Failed to parse gecos field, got %q", user.GECOS)
|
||||
}
|
||||
|
||||
if user.Homedir != "/home/place" {
|
||||
t.Errorf("Failed to parse homedir field, got %q", user.Homedir)
|
||||
}
|
||||
|
||||
if !user.NoCreateHome {
|
||||
t.Errorf("Failed to parse no_create_home field")
|
||||
}
|
||||
|
||||
if user.PrimaryGroup != "things" {
|
||||
t.Errorf("Failed to parse primary_group field, got %q", user.PrimaryGroup)
|
||||
}
|
||||
|
||||
if len(user.Groups) != 2 {
|
||||
t.Errorf("Failed to parse 2 goups, got %d", len(user.Groups))
|
||||
} else {
|
||||
if user.Groups[0] != "ping" {
|
||||
t.Errorf("First group was %q, not expected value 'ping'", user.Groups[0])
|
||||
}
|
||||
if user.Groups[1] != "pong" {
|
||||
t.Errorf("First group was %q, not expected value 'pong'", user.Groups[1])
|
||||
}
|
||||
}
|
||||
|
||||
if !user.NoUserGroup {
|
||||
t.Errorf("Failed to parse no_user_group field")
|
||||
}
|
||||
|
||||
if !user.System {
|
||||
t.Errorf("Failed to parse system field")
|
||||
}
|
||||
|
||||
if !user.NoLogInit {
|
||||
t.Errorf("Failed to parse no_log_init field")
|
||||
}
|
||||
|
||||
if user.Shell != "/bin/sh" {
|
||||
t.Errorf("Failed to parse shell field, got %q", user.Shell)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsersGithubUser(t *testing.T) {
|
||||
|
||||
contents := `
|
||||
users:
|
||||
- name: elroy
|
||||
coreos_ssh_import_github: bcwaldon
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.SSHImportGithubUser != "bcwaldon" {
|
||||
t.Errorf("github user is %q, expected 'bcwaldon'", user.SSHImportGithubUser)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsersSSHImportURL(t *testing.T) {
|
||||
contents := `
|
||||
users:
|
||||
- name: elroy
|
||||
coreos_ssh_import_url: https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.SSHImportURL != "https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys" {
|
||||
t.Errorf("ssh import url is %q, expected 'https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys'", user.SSHImportURL)
|
||||
}
|
||||
}
|
@ -1,56 +0,0 @@
|
||||
package config
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"compress/gzip"
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
func DecodeBase64Content(content string) ([]byte, error) {
|
||||
output, err := base64.StdEncoding.DecodeString(content)
|
||||
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Unable to decode base64: %q", err)
|
||||
}
|
||||
|
||||
return output, nil
|
||||
}
|
||||
|
||||
func DecodeGzipContent(content string) ([]byte, error) {
|
||||
gzr, err := gzip.NewReader(bytes.NewReader([]byte(content)))
|
||||
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Unable to decode gzip: %q", err)
|
||||
}
|
||||
defer gzr.Close()
|
||||
|
||||
buf := new(bytes.Buffer)
|
||||
buf.ReadFrom(gzr)
|
||||
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
func DecodeContent(content string, encoding string) ([]byte, error) {
|
||||
switch encoding {
|
||||
case "":
|
||||
return []byte(content), nil
|
||||
|
||||
case "b64", "base64":
|
||||
return DecodeBase64Content(content)
|
||||
|
||||
case "gz", "gzip":
|
||||
return DecodeGzipContent(content)
|
||||
|
||||
case "gz+base64", "gzip+base64", "gz+b64", "gzip+b64":
|
||||
gz, err := DecodeBase64Content(content)
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return DecodeGzipContent(string(gz))
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("Unsupported encoding %q", encoding)
|
||||
}
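
A short usage sketch of the decoder above (not part of this change set; the import path and the sample payload are assumptions for illustration only):

```
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	// Build a "gzip+base64" payload the way a write_files author would:
	// gzip-compress the content, then base64-encode the compressed bytes.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write([]byte("hello from write_files"))
	zw.Close()
	encoded := base64.StdEncoding.EncodeToString(buf.Bytes())

	// DecodeContent reverses both layers in a single call.
	plain, err := config.DecodeContent(encoded, "gzip+base64")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plain)) // hello from write_files
}
```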
|
@ -1,17 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type EtcHosts string
|
@ -1,67 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Etcd struct {
|
||||
Addr string `yaml:"addr" env:"ETCD_ADDR"`
|
||||
AdvertiseClientURLs string `yaml:"advertise_client_urls" env:"ETCD_ADVERTISE_CLIENT_URLS" deprecated:"etcd2 options no longer work for etcd"`
|
||||
BindAddr string `yaml:"bind_addr" env:"ETCD_BIND_ADDR"`
|
||||
CAFile string `yaml:"ca_file" env:"ETCD_CA_FILE"`
|
||||
CertFile string `yaml:"cert_file" env:"ETCD_CERT_FILE"`
|
||||
ClusterActiveSize int `yaml:"cluster_active_size" env:"ETCD_CLUSTER_ACTIVE_SIZE"`
|
||||
ClusterRemoveDelay float64 `yaml:"cluster_remove_delay" env:"ETCD_CLUSTER_REMOVE_DELAY"`
|
||||
ClusterSyncInterval float64 `yaml:"cluster_sync_interval" env:"ETCD_CLUSTER_SYNC_INTERVAL"`
|
||||
CorsOrigins string `yaml:"cors" env:"ETCD_CORS"`
|
||||
DataDir string `yaml:"data_dir" env:"ETCD_DATA_DIR"`
|
||||
Discovery string `yaml:"discovery" env:"ETCD_DISCOVERY"`
|
||||
DiscoveryFallback string `yaml:"discovery_fallback" env:"ETCD_DISCOVERY_FALLBACK" deprecated:"etcd2 options no longer work for etcd"`
|
||||
DiscoverySRV string `yaml:"discovery_srv" env:"ETCD_DISCOVERY_SRV" deprecated:"etcd2 options no longer work for etcd"`
|
||||
DiscoveryProxy string `yaml:"discovery_proxy" env:"ETCD_DISCOVERY_PROXY" deprecated:"etcd2 options no longer work for etcd"`
|
||||
ElectionTimeout int `yaml:"election_timeout" env:"ETCD_ELECTION_TIMEOUT" deprecated:"etcd2 options no longer work for etcd"`
|
||||
ForceNewCluster bool `yaml:"force_new_cluster" env:"ETCD_FORCE_NEW_CLUSTER" deprecated:"etcd2 options no longer work for etcd"`
|
||||
GraphiteHost string `yaml:"graphite_host" env:"ETCD_GRAPHITE_HOST"`
|
||||
HeartbeatInterval int `yaml:"heartbeat_interval" env:"ETCD_HEARTBEAT_INTERVAL" deprecated:"etcd2 options no longer work for etcd"`
|
||||
HTTPReadTimeout float64 `yaml:"http_read_timeout" env:"ETCD_HTTP_READ_TIMEOUT"`
|
||||
HTTPWriteTimeout float64 `yaml:"http_write_timeout" env:"ETCD_HTTP_WRITE_TIMEOUT"`
|
||||
InitialAdvertisePeerURLs string `yaml:"initial_advertise_peer_urls" env:"ETCD_INITIAL_ADVERTISE_PEER_URLS" deprecated:"etcd2 options no longer work for etcd"`
|
||||
InitialCluster string `yaml:"initial_cluster" env:"ETCD_INITIAL_CLUSTER" deprecated:"etcd2 options no longer work for etcd"`
|
||||
InitialClusterState string `yaml:"initial_cluster_state" env:"ETCD_INITIAL_CLUSTER_STATE" deprecated:"etcd2 options no longer work for etcd"`
|
||||
InitialClusterToken string `yaml:"initial_cluster_token" env:"ETCD_INITIAL_CLUSTER_TOKEN" deprecated:"etcd2 options no longer work for etcd"`
|
||||
KeyFile string `yaml:"key_file" env:"ETCD_KEY_FILE"`
|
||||
ListenClientURLs string `yaml:"listen_client_urls" env:"ETCD_LISTEN_CLIENT_URLS" deprecated:"etcd2 options no longer work for etcd"`
|
||||
ListenPeerURLs string `yaml:"listen_peer_urls" env:"ETCD_LISTEN_PEER_URLS" deprecated:"etcd2 options no longer work for etcd"`
|
||||
MaxResultBuffer int `yaml:"max_result_buffer" env:"ETCD_MAX_RESULT_BUFFER"`
|
||||
MaxRetryAttempts int `yaml:"max_retry_attempts" env:"ETCD_MAX_RETRY_ATTEMPTS"`
|
||||
MaxSnapshots int `yaml:"max_snapshots" env:"ETCD_MAX_SNAPSHOTS" deprecated:"etcd2 options no longer work for etcd"`
|
||||
MaxWALs int `yaml:"max_wals" env:"ETCD_MAX_WALS" deprecated:"etcd2 options no longer work for etcd"`
|
||||
Name string `yaml:"name" env:"ETCD_NAME"`
|
||||
PeerAddr string `yaml:"peer_addr" env:"ETCD_PEER_ADDR"`
|
||||
PeerBindAddr string `yaml:"peer_bind_addr" env:"ETCD_PEER_BIND_ADDR"`
|
||||
PeerCAFile string `yaml:"peer_ca_file" env:"ETCD_PEER_CA_FILE"`
|
||||
PeerCertFile string `yaml:"peer_cert_file" env:"ETCD_PEER_CERT_FILE"`
|
||||
PeerElectionTimeout int `yaml:"peer_election_timeout" env:"ETCD_PEER_ELECTION_TIMEOUT"`
|
||||
PeerHeartbeatInterval int `yaml:"peer_heartbeat_interval" env:"ETCD_PEER_HEARTBEAT_INTERVAL"`
|
||||
PeerKeyFile string `yaml:"peer_key_file" env:"ETCD_PEER_KEY_FILE"`
|
||||
Peers string `yaml:"peers" env:"ETCD_PEERS"`
|
||||
PeersFile string `yaml:"peers_file" env:"ETCD_PEERS_FILE"`
|
||||
Proxy string `yaml:"proxy" env:"ETCD_PROXY" deprecated:"etcd2 options no longer work for etcd"`
|
||||
RetryInterval float64 `yaml:"retry_interval" env:"ETCD_RETRY_INTERVAL"`
|
||||
Snapshot bool `yaml:"snapshot" env:"ETCD_SNAPSHOT"`
|
||||
SnapshotCount int `yaml:"snapshot_count" env:"ETCD_SNAPSHOTCOUNT"`
|
||||
StrTrace string `yaml:"trace" env:"ETCD_TRACE"`
|
||||
Verbose bool `yaml:"verbose" env:"ETCD_VERBOSE"`
|
||||
VeryVerbose bool `yaml:"very_verbose" env:"ETCD_VERY_VERBOSE"`
|
||||
VeryVeryVerbose bool `yaml:"very_very_verbose" env:"ETCD_VERY_VERY_VERBOSE"`
|
||||
}
|
@ -1,57 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Etcd2 struct {
|
||||
AdvertiseClientURLs string `yaml:"advertise_client_urls" env:"ETCD_ADVERTISE_CLIENT_URLS"`
|
||||
CAFile string `yaml:"ca_file" env:"ETCD_CA_FILE" deprecated:"ca_file obsoleted by trusted_ca_file and client_cert_auth"`
|
||||
CertFile string `yaml:"cert_file" env:"ETCD_CERT_FILE"`
|
||||
ClientCertAuth bool `yaml:"client_cert_auth" env:"ETCD_CLIENT_CERT_AUTH"`
|
||||
CorsOrigins string `yaml:"cors" env:"ETCD_CORS"`
|
||||
DataDir string `yaml:"data_dir" env:"ETCD_DATA_DIR"`
|
||||
Debug bool `yaml:"debug" env:"ETCD_DEBUG"`
|
||||
Discovery string `yaml:"discovery" env:"ETCD_DISCOVERY"`
|
||||
DiscoveryFallback string `yaml:"discovery_fallback" env:"ETCD_DISCOVERY_FALLBACK"`
|
||||
DiscoverySRV string `yaml:"discovery_srv" env:"ETCD_DISCOVERY_SRV"`
|
||||
DiscoveryProxy string `yaml:"discovery_proxy" env:"ETCD_DISCOVERY_PROXY"`
|
||||
ElectionTimeout int `yaml:"election_timeout" env:"ETCD_ELECTION_TIMEOUT"`
|
||||
ForceNewCluster bool `yaml:"force_new_cluster" env:"ETCD_FORCE_NEW_CLUSTER"`
|
||||
HeartbeatInterval int `yaml:"heartbeat_interval" env:"ETCD_HEARTBEAT_INTERVAL"`
|
||||
InitialAdvertisePeerURLs string `yaml:"initial_advertise_peer_urls" env:"ETCD_INITIAL_ADVERTISE_PEER_URLS"`
|
||||
InitialCluster string `yaml:"initial_cluster" env:"ETCD_INITIAL_CLUSTER"`
|
||||
InitialClusterState string `yaml:"initial_cluster_state" env:"ETCD_INITIAL_CLUSTER_STATE"`
|
||||
InitialClusterToken string `yaml:"initial_cluster_token" env:"ETCD_INITIAL_CLUSTER_TOKEN"`
|
||||
KeyFile string `yaml:"key_file" env:"ETCD_KEY_FILE"`
|
||||
ListenClientURLs string `yaml:"listen_client_urls" env:"ETCD_LISTEN_CLIENT_URLS"`
|
||||
ListenPeerURLs string `yaml:"listen_peer_urls" env:"ETCD_LISTEN_PEER_URLS"`
|
||||
LogPackageLevels string `yaml:"log_package_levels" env:"ETCD_LOG_PACKAGE_LEVELS"`
|
||||
MaxSnapshots int `yaml:"max_snapshots" env:"ETCD_MAX_SNAPSHOTS"`
|
||||
MaxWALs int `yaml:"max_wals" env:"ETCD_MAX_WALS"`
|
||||
Name string `yaml:"name" env:"ETCD_NAME"`
|
||||
PeerCAFile string `yaml:"peer_ca_file" env:"ETCD_PEER_CA_FILE" deprecated:"peer_ca_file obsoleted by peer_trusted_ca_file and peer_client_cert_auth"`
|
||||
PeerCertFile string `yaml:"peer_cert_file" env:"ETCD_PEER_CERT_FILE"`
|
||||
PeerKeyFile string `yaml:"peer_key_file" env:"ETCD_PEER_KEY_FILE"`
|
||||
PeerClientCertAuth bool `yaml:"peer_client_cert_auth" env:"ETCD_PEER_CLIENT_CERT_AUTH"`
|
||||
PeerTrustedCAFile string `yaml:"peer_trusted_ca_file" env:"ETCD_PEER_TRUSTED_CA_FILE"`
|
||||
Proxy string `yaml:"proxy" env:"ETCD_PROXY" valid:"^(on|off|readonly)$"`
|
||||
ProxyDialTimeout int `yaml:"proxy_dial_timeout" env:"ETCD_PROXY_DIAL_TIMEOUT"`
|
||||
ProxyFailureWait int `yaml:"proxy_failure_wait" env:"ETCD_PROXY_FAILURE_WAIT"`
|
||||
ProxyReadTimeout int `yaml:"proxy_read_timeout" env:"ETCD_PROXY_READ_TIMEOUT"`
|
||||
ProxyRefreshInterval int `yaml:"proxy_refresh_interval" env:"ETCD_PROXY_REFRESH_INTERVAL"`
|
||||
ProxyWriteTimeout int `yaml:"proxy_write_timeout" env:"ETCD_PROXY_WRITE_TIMEOUT"`
|
||||
SnapshotCount int `yaml:"snapshot_count" env:"ETCD_SNAPSHOT_COUNT"`
|
||||
TrustedCAFile string `yaml:"trusted_ca_file" env:"ETCD_TRUSTED_CA_FILE"`
|
||||
WalDir string `yaml:"wal_dir" env:"ETCD_WAL_DIR"`
|
||||
}
|
@ -1,23 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type File struct {
|
||||
Encoding string `yaml:"encoding" valid:"^(base64|b64|gz|gzip|gz\\+base64|gzip\\+base64|gz\\+b64|gzip\\+b64)$"`
|
||||
Content string `yaml:"content"`
|
||||
Owner string `yaml:"owner"`
|
||||
Path string `yaml:"path"`
|
||||
RawFilePermissions string `yaml:"permissions" valid:"^0?[0-7]{3,4}$"`
|
||||
}
|
@ -1,69 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestEncodingValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "base64", isValid: true},
|
||||
{value: "b64", isValid: true},
|
||||
{value: "gz", isValid: true},
|
||||
{value: "gzip", isValid: true},
|
||||
{value: "gz+base64", isValid: true},
|
||||
{value: "gzip+base64", isValid: true},
|
||||
{value: "gz+b64", isValid: true},
|
||||
{value: "gzip+b64", isValid: true},
|
||||
{value: "gzzzzbase64", isValid: false},
|
||||
{value: "gzipppbase64", isValid: false},
|
||||
{value: "unknown", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(File{Encoding: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRawFilePermissionsValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "744", isValid: true},
|
||||
{value: "0744", isValid: true},
|
||||
{value: "1744", isValid: true},
|
||||
{value: "01744", isValid: true},
|
||||
{value: "11744", isValid: false},
|
||||
{value: "rwxr--r--", isValid: false},
|
||||
{value: "800", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(File{RawFilePermissions: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,27 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Flannel struct {
|
||||
EtcdEndpoints string `yaml:"etcd_endpoints" env:"FLANNELD_ETCD_ENDPOINTS"`
|
||||
EtcdCAFile string `yaml:"etcd_cafile" env:"FLANNELD_ETCD_CAFILE"`
|
||||
EtcdCertFile string `yaml:"etcd_certfile" env:"FLANNELD_ETCD_CERTFILE"`
|
||||
EtcdKeyFile string `yaml:"etcd_keyfile" env:"FLANNELD_ETCD_KEYFILE"`
|
||||
EtcdPrefix string `yaml:"etcd_prefix" env:"FLANNELD_ETCD_PREFIX"`
|
||||
IPMasq string `yaml:"ip_masq" env:"FLANNELD_IP_MASQ"`
|
||||
SubnetFile string `yaml:"subnet_file" env:"FLANNELD_SUBNET_FILE"`
|
||||
Iface string `yaml:"interface" env:"FLANNELD_IFACE"`
|
||||
PublicIP string `yaml:"public_ip" env:"FLANNELD_PUBLIC_IP"`
|
||||
}
|
@ -1,33 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Fleet struct {
|
||||
AgentTTL string `yaml:"agent_ttl" env:"FLEET_AGENT_TTL"`
|
||||
AuthorizedKeysFile string `yaml:"authorized_keys_file" env:"FLEET_AUTHORIZED_KEYS_FILE"`
|
||||
DisableEngine bool `yaml:"disable_engine" env:"FLEET_DISABLE_ENGINE"`
|
||||
EngineReconcileInterval float64 `yaml:"engine_reconcile_interval" env:"FLEET_ENGINE_RECONCILE_INTERVAL"`
|
||||
EtcdCAFile string `yaml:"etcd_cafile" env:"FLEET_ETCD_CAFILE"`
|
||||
EtcdCertFile string `yaml:"etcd_certfile" env:"FLEET_ETCD_CERTFILE"`
|
||||
EtcdKeyFile string `yaml:"etcd_keyfile" env:"FLEET_ETCD_KEYFILE"`
|
||||
EtcdKeyPrefix string `yaml:"etcd_key_prefix" env:"FLEET_ETCD_KEY_PREFIX"`
|
||||
EtcdRequestTimeout float64 `yaml:"etcd_request_timeout" env:"FLEET_ETCD_REQUEST_TIMEOUT"`
|
||||
EtcdServers string `yaml:"etcd_servers" env:"FLEET_ETCD_SERVERS"`
|
||||
Metadata string `yaml:"metadata" env:"FLEET_METADATA"`
|
||||
PublicIP string `yaml:"public_ip" env:"FLEET_PUBLIC_IP"`
|
||||
TokenLimit int `yaml:"token_limit" env:"FLEET_TOKEN_LIMIT"`
|
||||
Verbosity int `yaml:"verbosity" env:"FLEET_VERBOSITY"`
|
||||
VerifyUnits bool `yaml:"verify_units" env:"FLEET_VERIFY_UNITS"`
|
||||
}
|
@ -1,26 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
)
|
||||
|
||||
func IsIgnitionConfig(userdata string) bool {
|
||||
var cfg struct {
|
||||
Version *int `json:"ignitionVersion" yaml:"ignition_version"`
|
||||
}
|
||||
return (json.Unmarshal([]byte(userdata), &cfg) == nil && cfg.Version != nil)
|
||||
}
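
A minimal sketch of how this check discriminates userdata (assumed import path; the payload strings are illustrative, not taken from the original repository):

```
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	ignition := `{"ignitionVersion": 1}`           // valid JSON carrying the version key
	cloudConfig := "#cloud-config\nhostname: core" // not JSON at all

	fmt.Println(config.IsIgnitionConfig(ignition))    // true
	fmt.Println(config.IsIgnitionConfig(cloudConfig)) // false
}
```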
|
@ -1,25 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Locksmith struct {
|
||||
Endpoint string `yaml:"endpoint" env:"LOCKSMITHD_ENDPOINT"`
|
||||
EtcdCAFile string `yaml:"etcd_cafile" env:"LOCKSMITHD_ETCD_CAFILE"`
|
||||
EtcdCertFile string `yaml:"etcd_certfile" env:"LOCKSMITHD_ETCD_CERTFILE"`
|
||||
EtcdKeyFile string `yaml:"etcd_keyfile" env:"LOCKSMITHD_ETCD_KEYFILE"`
|
||||
Group string `yaml:"group" env:"LOCKSMITHD_GROUP"`
|
||||
RebootWindowStart string `yaml:"window_start" env:"REBOOT_WINDOW_START" valid:"^((?i:sun|mon|tue|wed|thu|fri|sat|sun) )?0*([0-9]|1[0-9]|2[0-3]):0*([0-9]|[1-5][0-9])$"`
|
||||
RebootWindowLength string `yaml:"window_length" env:"REBOOT_WINDOW_LENGTH" valid:"^[-+]?([0-9]*(\\.[0-9]*)?[a-z]+)+$"`
|
||||
}
|
@ -1,76 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestRebootWindowStart(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "Sun 0:0", isValid: true},
|
||||
{value: "Sun 00:00", isValid: true},
|
||||
{value: "sUn 23:59", isValid: true},
|
||||
{value: "mon 0:0", isValid: true},
|
||||
{value: "tue 0:0", isValid: true},
|
||||
{value: "tues 0:0", isValid: false},
|
||||
{value: "wed 0:0", isValid: true},
|
||||
{value: "thu 0:0", isValid: true},
|
||||
{value: "thur 0:0", isValid: false},
|
||||
{value: "fri 0:0", isValid: true},
|
||||
{value: "sat 0:0", isValid: true},
|
||||
{value: "sat00:00", isValid: false},
|
||||
{value: "00:00", isValid: true},
|
||||
{value: "10:10", isValid: true},
|
||||
{value: "20:20", isValid: true},
|
||||
{value: "20:30", isValid: true},
|
||||
{value: "20:40", isValid: true},
|
||||
{value: "20:50", isValid: true},
|
||||
{value: "20:60", isValid: false},
|
||||
{value: "24:00", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(Locksmith{RebootWindowStart: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRebootWindowLength(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "1h", isValid: true},
|
||||
{value: "1d", isValid: true},
|
||||
{value: "0d", isValid: true},
|
||||
{value: "0.5h", isValid: true},
|
||||
{value: "0.5.0h", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(Locksmith{RebootWindowLength: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,23 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type OEM struct {
|
||||
ID string `yaml:"id"`
|
||||
Name string `yaml:"name"`
|
||||
VersionID string `yaml:"version_id"`
|
||||
HomeURL string `yaml:"home_url"`
|
||||
BugReportURL string `yaml:"bug_report_url"`
|
||||
}
|
@ -1,31 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
type Script []byte
|
||||
|
||||
func IsScript(userdata string) bool {
|
||||
header := strings.SplitN(userdata, "\n", 2)[0]
|
||||
return strings.HasPrefix(header, "#!")
|
||||
}
|
||||
|
||||
func NewScript(userdata string) (*Script, error) {
|
||||
s := Script(userdata)
|
||||
return &s, nil
|
||||
}
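
A brief sketch of the shebang check above (assumed import path; the userdata strings are made up):

```
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	// Only the first line matters: anything starting with "#!" is treated as a script.
	fmt.Println(config.IsScript("#!/bin/bash\necho hello"))        // true
	fmt.Println(config.IsScript("#cloud-config\nhostname: core")) // false
}
```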
|
@ -1,7 +0,0 @@
package config

type SystemInfo struct {
	DefaultUser struct {
		Name string `yaml:"name"`
	} `yaml:"default_user"`
}
@ -1,30 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Unit struct {
|
||||
Name string `yaml:"name"`
|
||||
Mask bool `yaml:"mask"`
|
||||
Enable bool `yaml:"enable"`
|
||||
Runtime bool `yaml:"runtime"`
|
||||
Content string `yaml:"content"`
|
||||
Command string `yaml:"command" valid:"^(start|stop|restart|reload|try-restart|reload-or-restart|reload-or-try-restart)$"`
|
||||
DropIns []UnitDropIn `yaml:"drop_ins"`
|
||||
}
|
||||
|
||||
type UnitDropIn struct {
|
||||
Name string `yaml:"name"`
|
||||
Content string `yaml:"content"`
|
||||
}
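
A hedged sketch of how these structs map onto cloud-config YAML. It assumes a YAML library compatible with the `yaml` struct tags (gopkg.in/yaml.v2 is used here), and the unit contents are illustrative; the struct definitions below mirror a subset of the ones above so the example is self-contained:

```
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type UnitDropIn struct {
	Name    string `yaml:"name"`
	Content string `yaml:"content"`
}

type Unit struct {
	Name    string       `yaml:"name"`
	Enable  bool         `yaml:"enable"`
	Command string       `yaml:"command"`
	DropIns []UnitDropIn `yaml:"drop_ins"`
}

func main() {
	data := []byte(`
name: etcd2.service
enable: true
command: start
drop_ins:
  - name: timeout.conf
    content: |
      [Service]
      TimeoutStartSec=0
`)

	var u Unit
	if err := yaml.Unmarshal(data, &u); err != nil {
		panic(err)
	}
	fmt.Printf("%s (%d drop-in)\n", u.Name, len(u.DropIns)) // etcd2.service (1 drop-in)
}
```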
|
@ -1,46 +0,0 @@
|
||||
/*
|
||||
Copyright 2014 CoreOS, Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestCommandValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "start", isValid: true},
|
||||
{value: "stop", isValid: true},
|
||||
{value: "restart", isValid: true},
|
||||
{value: "reload", isValid: true},
|
||||
{value: "try-restart", isValid: true},
|
||||
{value: "reload-or-restart", isValid: true},
|
||||
{value: "reload-or-try-restart", isValid: true},
|
||||
{value: "tryrestart", isValid: false},
|
||||
{value: "unknown", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(Unit{Command: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,21 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Update struct {
|
||||
RebootStrategy string `yaml:"reboot_strategy" env:"REBOOT_STRATEGY" valid:"^(best-effort|etcd-lock|reboot|off)$"`
|
||||
Group string `yaml:"group" env:"GROUP"`
|
||||
Server string `yaml:"server" env:"SERVER"`
|
||||
}
|
@ -1,43 +0,0 @@
|
||||
/*
|
||||
Copyright 2014 CoreOS, Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestRebootStrategyValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "best-effort", isValid: true},
|
||||
{value: "etcd-lock", isValid: true},
|
||||
{value: "reboot", isValid: true},
|
||||
{value: "off", isValid: true},
|
||||
{value: "besteffort", isValid: false},
|
||||
{value: "unknown", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(Update{RebootStrategy: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,34 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type User struct {
|
||||
Name string `yaml:"name"`
|
||||
PasswordHash string `yaml:"passwd"`
|
||||
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
|
||||
SSHImportGithubUser string `yaml:"coreos_ssh_import_github" deprecated:"trying to fetch from a remote endpoint introduces too many intermittent errors"`
|
||||
SSHImportGithubUsers []string `yaml:"coreos_ssh_import_github_users" deprecated:"trying to fetch from a remote endpoint introduces too many intermittent errors"`
|
||||
SSHImportURL string `yaml:"coreos_ssh_import_url" deprecated:"trying to fetch from a remote endpoint introduces too many intermittent errors"`
|
||||
GECOS string `yaml:"gecos"`
|
||||
Homedir string `yaml:"homedir"`
|
||||
NoCreateHome bool `yaml:"no_create_home"`
|
||||
PrimaryGroup string `yaml:"primary_group"`
|
||||
Groups []string `yaml:"groups"`
|
||||
NoUserGroup bool `yaml:"no_user_group"`
|
||||
System bool `yaml:"system"`
|
||||
NoLogInit bool `yaml:"no_log_init"`
|
||||
LockPasswd bool `yaml:"lock_passwd"`
|
||||
Shell string `yaml:"shell"`
|
||||
}
|
@ -1,52 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
// context represents the current position within a newline-delimited string.
|
||||
// Each line is loaded, one by one, into currentLine (newline omitted) and
|
||||
// lineNumber keeps track of its position within the original string.
|
||||
type context struct {
|
||||
currentLine string
|
||||
remainingLines string
|
||||
lineNumber int
|
||||
}
|
||||
|
||||
// Increment moves the context to the next line (if available).
|
||||
func (c *context) Increment() {
|
||||
if c.currentLine == "" && c.remainingLines == "" {
|
||||
return
|
||||
}
|
||||
|
||||
lines := strings.SplitN(c.remainingLines, "\n", 2)
|
||||
c.currentLine = lines[0]
|
||||
if len(lines) == 2 {
|
||||
c.remainingLines = lines[1]
|
||||
} else {
|
||||
c.remainingLines = ""
|
||||
}
|
||||
c.lineNumber++
|
||||
}
|
||||
|
||||
// NewContext creates a context from the provided data. It strips out all
|
||||
// carriage returns and moves to the first line (if available).
|
||||
func NewContext(content []byte) context {
|
||||
c := context{remainingLines: strings.Replace(string(content), "\r", "", -1)}
|
||||
c.Increment()
|
||||
return c
|
||||
}
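
A small in-package sketch of the line walk (written as if it lived in package validate, since the type and its fields are unexported; the input string is made up):

```
package validate

import "fmt"

// walkLines prints each line of the input with its 1-based line number,
// mirroring how the validator steps through a cloud-config document.
func walkLines(src []byte) {
	c := NewContext(src)
	for c.currentLine != "" || c.remainingLines != "" {
		fmt.Println(c.lineNumber, c.currentLine)
		c.Increment()
	}
}

// walkLines([]byte("first\r\nsecond\r\nthird")) prints:
//   1 first
//   2 second
//   3 third
```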
|
@ -1,131 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestNewContext(t *testing.T) {
|
||||
tests := []struct {
|
||||
in string
|
||||
|
||||
out context
|
||||
}{
|
||||
{
|
||||
out: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 0,
|
||||
},
|
||||
},
|
||||
{
|
||||
in: "this\r\nis\r\na\r\ntest",
|
||||
out: context{
|
||||
currentLine: "this",
|
||||
remainingLines: "is\na\ntest",
|
||||
lineNumber: 1,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if out := NewContext([]byte(tt.in)); !reflect.DeepEqual(tt.out, out) {
|
||||
t.Errorf("bad context (%q): want %#v, got %#v", tt.in, tt.out, out)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIncrement(t *testing.T) {
|
||||
tests := []struct {
|
||||
init context
|
||||
op func(c *context)
|
||||
|
||||
res context
|
||||
}{
|
||||
{
|
||||
init: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 0,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 0,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
{
|
||||
init: context{
|
||||
currentLine: "test",
|
||||
remainingLines: "",
|
||||
lineNumber: 1,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 2,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
{
|
||||
init: context{
|
||||
currentLine: "this",
|
||||
remainingLines: "is\na\ntest",
|
||||
lineNumber: 1,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "is",
|
||||
remainingLines: "a\ntest",
|
||||
lineNumber: 2,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
{
|
||||
init: context{
|
||||
currentLine: "this",
|
||||
remainingLines: "is\na\ntest",
|
||||
lineNumber: 1,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "test",
|
||||
remainingLines: "",
|
||||
lineNumber: 4,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
res := tt.init
|
||||
if tt.op(&res); !reflect.DeepEqual(tt.res, res) {
|
||||
t.Errorf("bad context (%d, %#v): want %#v, got %#v", i, tt.init, tt.res, res)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,157 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
)
|
||||
|
||||
var (
|
||||
yamlKey = regexp.MustCompile(`^ *-? ?(?P<key>.*?):`)
|
||||
yamlElem = regexp.MustCompile(`^ *-`)
|
||||
)
|
||||
|
||||
type node struct {
|
||||
name string
|
||||
line int
|
||||
children []node
|
||||
field reflect.StructField
|
||||
reflect.Value
|
||||
}
|
||||
|
||||
// Child attempts to find the child with the given name in the node's list of
|
||||
// children. If no such child is found, an invalid node is returned.
|
||||
func (n node) Child(name string) node {
|
||||
for _, c := range n.children {
|
||||
if c.name == name {
|
||||
return c
|
||||
}
|
||||
}
|
||||
return node{}
|
||||
}
|
||||
|
||||
// HumanType returns the human-consumable string representation of the type of
|
||||
// the node.
|
||||
func (n node) HumanType() string {
|
||||
switch k := n.Kind(); k {
|
||||
case reflect.Slice:
|
||||
c := n.Type().Elem()
|
||||
return "[]" + node{Value: reflect.New(c).Elem()}.HumanType()
|
||||
default:
|
||||
return k.String()
|
||||
}
|
||||
}
|
||||
|
||||
// NewNode returns the node representation of the given value. The context
|
||||
// will be used in an attempt to determine line numbers for the given value.
|
||||
func NewNode(value interface{}, context context) node {
|
||||
var n node
|
||||
toNode(value, context, &n)
|
||||
return n
|
||||
}
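
A hedged in-package sketch of NewNode in action, using a hand-built map in place of a YAML parser (the helper name and the config snippet are hypothetical):

```
package validate

import "fmt"

// exampleNewNode builds a node tree from an already-parsed document and shows
// the line numbers recovered from the raw source by the context.
func exampleNewNode() {
	raw := []byte("coreos:\n  etcd:\n    discovery: https://example.com/abc")
	parsed := map[interface{}]interface{}{
		"coreos": map[interface{}]interface{}{
			"etcd": map[interface{}]interface{}{
				"discovery": "https://example.com/abc",
			},
		},
	}

	n := NewNode(parsed, NewContext(raw))
	d := n.Child("coreos").Child("etcd").Child("discovery")
	fmt.Println(d.line, d.String()) // 3 https://example.com/abc
}
```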
|
||||
|
||||
// toNode converts the given value into a node and then recursively processes
|
||||
// each of the node's components (e.g. fields, array elements, keys).
|
||||
func toNode(v interface{}, c context, n *node) {
|
||||
vv := reflect.ValueOf(v)
|
||||
if !vv.IsValid() {
|
||||
return
|
||||
}
|
||||
|
||||
n.Value = vv
|
||||
switch vv.Kind() {
|
||||
case reflect.Struct:
|
||||
// Walk over each field in the structure, skipping unexported fields,
|
||||
// and create a node for it.
|
||||
for i := 0; i < vv.Type().NumField(); i++ {
|
||||
ft := vv.Type().Field(i)
|
||||
k := ft.Tag.Get("yaml")
|
||||
if k == "-" || k == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
cn := node{name: k, field: ft}
|
||||
c, ok := findKey(cn.name, c)
|
||||
if ok {
|
||||
cn.line = c.lineNumber
|
||||
}
|
||||
toNode(vv.Field(i).Interface(), c, &cn)
|
||||
n.children = append(n.children, cn)
|
||||
}
|
||||
case reflect.Map:
|
||||
// Walk over each key in the map and create a node for it.
|
||||
v := v.(map[interface{}]interface{})
|
||||
for k, cv := range v {
|
||||
cn := node{name: fmt.Sprintf("%s", k)}
|
||||
c, ok := findKey(cn.name, c)
|
||||
if ok {
|
||||
cn.line = c.lineNumber
|
||||
}
|
||||
toNode(cv, c, &cn)
|
||||
n.children = append(n.children, cn)
|
||||
}
|
||||
case reflect.Slice:
|
||||
// Walk over each element in the slice and create a node for it.
|
||||
// While iterating over the slice, preserve the context after it
|
||||
// is modified. This allows the line numbers to reflect the current
|
||||
// element instead of the first.
|
||||
for i := 0; i < vv.Len(); i++ {
|
||||
cn := node{
|
||||
name: fmt.Sprintf("%s[%d]", n.name, i),
|
||||
field: n.field,
|
||||
}
|
||||
var ok bool
|
||||
c, ok = findElem(c)
|
||||
if ok {
|
||||
cn.line = c.lineNumber
|
||||
}
|
||||
toNode(vv.Index(i).Interface(), c, &cn)
|
||||
n.children = append(n.children, cn)
|
||||
c.Increment()
|
||||
}
|
||||
case reflect.String, reflect.Int, reflect.Bool, reflect.Float64:
|
||||
default:
|
||||
panic(fmt.Sprintf("toNode(): unhandled kind %s", vv.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
// findKey attempts to find the requested key within the provided context.
|
||||
// A modified copy of the context is returned with every line up to the key
|
||||
// incremented past. A boolean, true if the key was found, is also returned.
|
||||
func findKey(key string, context context) (context, bool) {
|
||||
return find(yamlKey, key, context)
|
||||
}
|
||||
|
||||
// findElem attempts to find an array element within the provided context.
|
||||
// A modified copy of the context is returned with every line up to the array
|
||||
// element incremented past. A boolean, true if the key was found, is also
|
||||
// returned.
|
||||
func findElem(context context) (context, bool) {
|
||||
return find(yamlElem, "", context)
|
||||
}
|
||||
|
||||
func find(exp *regexp.Regexp, key string, context context) (context, bool) {
|
||||
for len(context.currentLine) > 0 || len(context.remainingLines) > 0 {
|
||||
matches := exp.FindStringSubmatch(context.currentLine)
|
||||
if len(matches) > 0 && (key == "" || matches[1] == key) {
|
||||
return context, true
|
||||
}
|
||||
|
||||
context.Increment()
|
||||
}
|
||||
return context, false
|
||||
}
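
An in-package sketch of how a key's line number is recovered with the helpers above (the input is hypothetical):

```
package validate

import "fmt"

func exampleFindDiscoveryLine() {
	c := NewContext([]byte("coreos:\n  etcd:\n    discovery: https://discovery.etcd.io/abc"))

	// findKey scans forward from the current line until the yamlKey regexp
	// matches the requested key, returning the advanced context.
	if found, ok := findKey("discovery", c); ok {
		fmt.Println(found.lineNumber) // 3
	}
}
```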
|
@ -1,284 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestChild(t *testing.T) {
|
||||
tests := []struct {
|
||||
parent node
|
||||
name string
|
||||
|
||||
child node
|
||||
}{
|
||||
{},
|
||||
{
|
||||
name: "c1",
|
||||
},
|
||||
{
|
||||
parent: node{
|
||||
children: []node{
|
||||
node{name: "c1"},
|
||||
node{name: "c2"},
|
||||
node{name: "c3"},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
parent: node{
|
||||
children: []node{
|
||||
node{name: "c1"},
|
||||
node{name: "c2"},
|
||||
node{name: "c3"},
|
||||
},
|
||||
},
|
||||
name: "c2",
|
||||
child: node{name: "c2"},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if child := tt.parent.Child(tt.name); !reflect.DeepEqual(tt.child, child) {
|
||||
t.Errorf("bad child (%q): want %#v, got %#v", tt.name, tt.child, child)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestHumanType(t *testing.T) {
|
||||
tests := []struct {
|
||||
node node
|
||||
|
||||
humanType string
|
||||
}{
|
||||
{
|
||||
humanType: "invalid",
|
||||
},
|
||||
{
|
||||
node: node{Value: reflect.ValueOf("hello")},
|
||||
humanType: "string",
|
||||
},
|
||||
{
|
||||
node: node{
|
||||
Value: reflect.ValueOf([]int{1, 2}),
|
||||
children: []node{
|
||||
node{Value: reflect.ValueOf(1)},
|
||||
node{Value: reflect.ValueOf(2)},
|
||||
}},
|
||||
humanType: "[]int",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if humanType := tt.node.HumanType(); tt.humanType != humanType {
|
||||
t.Errorf("bad type (%q): want %q, got %q", tt.node, tt.humanType, humanType)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestToNode(t *testing.T) {
|
||||
tests := []struct {
|
||||
value interface{}
|
||||
context context
|
||||
|
||||
node node
|
||||
}{
|
||||
{},
|
||||
{
|
||||
value: struct{}{},
|
||||
node: node{Value: reflect.ValueOf(struct{}{})},
|
||||
},
|
||||
{
|
||||
value: struct {
|
||||
A int `yaml:"a"`
|
||||
}{},
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
name: "a",
|
||||
field: reflect.TypeOf(struct {
|
||||
A int `yaml:"a"`
|
||||
}{}).Field(0),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
value: struct {
|
||||
A []int `yaml:"a"`
|
||||
}{},
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
name: "a",
|
||||
field: reflect.TypeOf(struct {
|
||||
A []int `yaml:"a"`
|
||||
}{}).Field(0),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
value: map[interface{}]interface{}{
|
||||
"a": map[interface{}]interface{}{
|
||||
"b": 2,
|
||||
},
|
||||
},
|
||||
context: NewContext([]byte("a:\n b: 2")),
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
line: 1,
|
||||
name: "a",
|
||||
children: []node{
|
||||
node{name: "b", line: 2},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
value: struct {
|
||||
A struct {
|
||||
Jon bool `yaml:"b"`
|
||||
} `yaml:"a"`
|
||||
}{},
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
name: "a",
|
||||
children: []node{
|
||||
node{
|
||||
name: "b",
|
||||
field: reflect.TypeOf(struct {
|
||||
Jon bool `yaml:"b"`
|
||||
}{}).Field(0),
|
||||
Value: reflect.ValueOf(false),
|
||||
},
|
||||
},
|
||||
field: reflect.TypeOf(struct {
|
||||
A struct {
|
||||
Jon bool `yaml:"b"`
|
||||
} `yaml:"a"`
|
||||
}{}).Field(0),
|
||||
Value: reflect.ValueOf(struct {
|
||||
Jon bool `yaml:"b"`
|
||||
}{}),
|
||||
},
|
||||
},
|
||||
Value: reflect.ValueOf(struct {
|
||||
A struct {
|
||||
Jon bool `yaml:"b"`
|
||||
} `yaml:"a"`
|
||||
}{}),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
var node node
|
||||
toNode(tt.value, tt.context, &node)
|
||||
if !nodesEqual(tt.node, node) {
|
||||
t.Errorf("bad node (%#v): want %#v, got %#v", tt.value, tt.node, node)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFindKey(t *testing.T) {
|
||||
tests := []struct {
|
||||
key string
|
||||
context context
|
||||
|
||||
found bool
|
||||
}{
|
||||
{},
|
||||
{
|
||||
key: "key1",
|
||||
context: NewContext([]byte("key1: hi")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
key: "key2",
|
||||
context: NewContext([]byte("key1: hi")),
|
||||
found: false,
|
||||
},
|
||||
{
|
||||
key: "key3",
|
||||
context: NewContext([]byte("key1:\n key2:\n key3: hi")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
key: "key4",
|
||||
context: NewContext([]byte("key1:\n - key4: hi")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
key: "key5",
|
||||
context: NewContext([]byte("#key5")),
|
||||
found: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if _, found := findKey(tt.key, tt.context); tt.found != found {
|
||||
t.Errorf("bad find (%q): want %t, got %t", tt.key, tt.found, found)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFindElem(t *testing.T) {
|
||||
tests := []struct {
|
||||
context context
|
||||
|
||||
found bool
|
||||
}{
|
||||
{},
|
||||
{
|
||||
context: NewContext([]byte("test: hi")),
|
||||
found: false,
|
||||
},
|
||||
{
|
||||
context: NewContext([]byte("test:\n - a\n -b")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
context: NewContext([]byte("test:\n -\n a")),
|
||||
found: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if _, found := findElem(tt.context); tt.found != found {
|
||||
t.Errorf("bad find (%q): want %t, got %t", tt.context, tt.found, found)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func nodesEqual(a, b node) bool {
|
||||
if a.name != b.name ||
|
||||
a.line != b.line ||
|
||||
!reflect.DeepEqual(a.field, b.field) ||
|
||||
len(a.children) != len(b.children) {
|
||||
return false
|
||||
}
|
||||
for i := 0; i < len(a.children); i++ {
|
||||
if !nodesEqual(a.children[i], b.children[i]) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
@ -1,88 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// Report represents the list of entries resulting from validation.
|
||||
type Report struct {
|
||||
entries []Entry
|
||||
}
|
||||
|
||||
// Error adds an error entry to the report.
|
||||
func (r *Report) Error(line int, message string) {
|
||||
r.entries = append(r.entries, Entry{entryError, message, line})
|
||||
}
|
||||
|
||||
// Warning adds a warning entry to the report.
|
||||
func (r *Report) Warning(line int, message string) {
|
||||
r.entries = append(r.entries, Entry{entryWarning, message, line})
|
||||
}
|
||||
|
||||
// Info adds an info entry to the report.
|
||||
func (r *Report) Info(line int, message string) {
|
||||
r.entries = append(r.entries, Entry{entryInfo, message, line})
|
||||
}
|
||||
|
||||
// Entries returns the list of entries in the report.
|
||||
func (r *Report) Entries() []Entry {
|
||||
return r.entries
|
||||
}
|
||||
|
||||
// Entry represents a single generic item in the report.
|
||||
type Entry struct {
|
||||
kind entryKind
|
||||
message string
|
||||
line int
|
||||
}
|
||||
|
||||
// String returns a human-readable representation of the entry.
|
||||
func (e Entry) String() string {
|
||||
return fmt.Sprintf("line %d: %s: %s", e.line, e.kind, e.message)
|
||||
}
|
||||
|
||||
// MarshalJSON satisfies the json.Marshaler interface, returning the entry
|
||||
// encoded as a JSON object.
|
||||
func (e Entry) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(map[string]interface{}{
|
||||
"kind": e.kind.String(),
|
||||
"message": e.message,
|
||||
"line": e.line,
|
||||
})
|
||||
}
|
||||
|
||||
type entryKind int
|
||||
|
||||
const (
|
||||
entryError entryKind = iota
|
||||
entryWarning
|
||||
entryInfo
|
||||
)
|
||||
|
||||
func (k entryKind) String() string {
|
||||
switch k {
|
||||
case entryError:
|
||||
return "error"
|
||||
case entryWarning:
|
||||
return "warning"
|
||||
case entryInfo:
|
||||
return "info"
|
||||
default:
|
||||
panic(fmt.Sprintf("invalid kind %d", k))
|
||||
}
|
||||
}
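
A brief usage sketch for the report types above (the import path is assumed to be "github.com/coreos/coreos-cloudinit/config/validate"; the messages are made up):

```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/coreos/coreos-cloudinit/config/validate"
)

func main() {
	var r validate.Report
	r.Warning(3, `unrecognized key "foo"`)
	r.Error(7, `content cannot be decoded as "base64"`)

	// Entries come back in the order they were added.
	for _, e := range r.Entries() {
		fmt.Println(e) // e.g. line 3: warning: unrecognized key "foo"
	}

	// Each Entry marshals itself via its MarshalJSON method.
	out, err := json.Marshal(r.Entries())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```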
|
@ -1,96 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestEntry(t *testing.T) {
|
||||
tests := []struct {
|
||||
entry Entry
|
||||
|
||||
str string
|
||||
json []byte
|
||||
}{
|
||||
{
|
||||
Entry{entryInfo, "test info", 1},
|
||||
"line 1: info: test info",
|
||||
[]byte(`{"kind":"info","line":1,"message":"test info"}`),
|
||||
},
|
||||
{
|
||||
Entry{entryWarning, "test warning", 1},
|
||||
"line 1: warning: test warning",
|
||||
[]byte(`{"kind":"warning","line":1,"message":"test warning"}`),
|
||||
},
|
||||
{
|
||||
Entry{entryError, "test error", 2},
|
||||
"line 2: error: test error",
|
||||
[]byte(`{"kind":"error","line":2,"message":"test error"}`),
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if str := tt.entry.String(); tt.str != str {
|
||||
t.Errorf("bad string (%q): want %q, got %q", tt.entry, tt.str, str)
|
||||
}
|
||||
json, err := tt.entry.MarshalJSON()
|
||||
if err != nil {
|
||||
t.Errorf("bad error (%q): want %v, got %q", tt.entry, nil, err)
|
||||
}
|
||||
if !bytes.Equal(tt.json, json) {
|
||||
t.Errorf("bad JSON (%q): want %q, got %q", tt.entry, tt.json, json)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestReport(t *testing.T) {
|
||||
type reportFunc struct {
|
||||
fn func(*Report, int, string)
|
||||
line int
|
||||
message string
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
fs []reportFunc
|
||||
|
||||
es []Entry
|
||||
}{
|
||||
{
|
||||
[]reportFunc{
|
||||
{(*Report).Warning, 1, "test warning 1"},
|
||||
{(*Report).Error, 2, "test error 2"},
|
||||
{(*Report).Info, 10, "test info 10"},
|
||||
},
|
||||
[]Entry{
|
||||
Entry{entryWarning, "test warning 1", 1},
|
||||
Entry{entryError, "test error 2", 2},
|
||||
Entry{entryInfo, "test info 10", 10},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
r := Report{}
|
||||
for _, f := range tt.fs {
|
||||
f.fn(&r, f.line, f.message)
|
||||
}
|
||||
if es := r.Entries(); !reflect.DeepEqual(tt.es, es) {
|
||||
t.Errorf("bad entries (%v): want %#v, got %#v", tt.fs, tt.es, es)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,180 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/url"
|
||||
"path"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
)
|
||||
|
||||
type rule func(config node, report *Report)
|
||||
|
||||
// Rules contains all of the validation rules.
|
||||
var Rules []rule = []rule{
|
||||
checkDiscoveryUrl,
|
||||
checkEncoding,
|
||||
checkStructure,
|
||||
checkValidity,
|
||||
checkWriteFiles,
|
||||
checkWriteFilesUnderCoreos,
|
||||
}
|
||||
|
||||
// checkDiscoveryUrl verifies that the string is a valid url.
|
||||
func checkDiscoveryUrl(cfg node, report *Report) {
|
||||
c := cfg.Child("coreos").Child("etcd").Child("discovery")
|
||||
if !c.IsValid() {
|
||||
return
|
||||
}
|
||||
|
||||
if _, err := url.ParseRequestURI(c.String()); err != nil {
|
||||
report.Warning(c.line, "discovery URL is not valid")
|
||||
}
|
||||
}
|
||||
|
||||
// checkEncoding validates that, for each file under 'write_files', the
|
||||
// content can be decoded given the specified encoding.
|
||||
func checkEncoding(cfg node, report *Report) {
|
||||
for _, f := range cfg.Child("write_files").children {
|
||||
e := f.Child("encoding")
|
||||
if !e.IsValid() {
|
||||
continue
|
||||
}
|
||||
|
||||
c := f.Child("content")
|
||||
if _, err := config.DecodeContent(c.String(), e.String()); err != nil {
|
||||
report.Error(c.line, fmt.Sprintf("content cannot be decoded as %q", e.String()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// checkStructure compares the provided config to the empty config.CloudConfig
|
||||
// structure. Each node is checked to make sure that it exists in the known
|
||||
// structure and that its type is compatible.
|
||||
func checkStructure(cfg node, report *Report) {
|
||||
g := NewNode(config.CloudConfig{}, NewContext([]byte{}))
|
||||
checkNodeStructure(cfg, g, report)
|
||||
}
|
||||
|
||||
func checkNodeStructure(n, g node, r *Report) {
|
||||
if !isCompatible(n.Kind(), g.Kind()) {
|
||||
r.Warning(n.line, fmt.Sprintf("incorrect type for %q (want %s)", n.name, g.HumanType()))
|
||||
return
|
||||
}
|
||||
|
||||
switch g.Kind() {
|
||||
case reflect.Struct:
|
||||
for _, cn := range n.children {
|
||||
if cg := g.Child(cn.name); cg.IsValid() {
|
||||
if msg := cg.field.Tag.Get("deprecated"); msg != "" {
|
||||
r.Warning(cn.line, fmt.Sprintf("deprecated key %q (%s)", cn.name, msg))
|
||||
}
|
||||
checkNodeStructure(cn, cg, r)
|
||||
} else {
|
||||
r.Warning(cn.line, fmt.Sprintf("unrecognized key %q", cn.name))
|
||||
}
|
||||
}
|
||||
case reflect.Slice:
|
||||
for _, cn := range n.children {
|
||||
var cg node
|
||||
c := g.Type().Elem()
|
||||
toNode(reflect.New(c).Elem().Interface(), context{}, &cg)
|
||||
checkNodeStructure(cn, cg, r)
|
||||
}
|
||||
case reflect.String, reflect.Int, reflect.Float64, reflect.Bool:
|
||||
default:
|
||||
panic(fmt.Sprintf("checkNodeStructure(): unhandled kind %s", g.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
// isCompatible determines if the type of kind n can be converted to the type
|
||||
// of kind g in the context of YAML. This is not an exhaustive list, but it's
|
||||
// enough for the purposes of cloud-config validation.
|
||||
func isCompatible(n, g reflect.Kind) bool {
|
||||
switch g {
|
||||
case reflect.String:
|
||||
return n == reflect.String || n == reflect.Int || n == reflect.Float64 || n == reflect.Bool
|
||||
case reflect.Struct:
|
||||
return n == reflect.Struct || n == reflect.Map
|
||||
case reflect.Float64:
|
||||
return n == reflect.Float64 || n == reflect.Int
|
||||
case reflect.Bool, reflect.Slice, reflect.Int:
|
||||
return n == g
|
||||
default:
|
||||
panic(fmt.Sprintf("isCompatible(): unhandled kind %s", g))
|
||||
}
|
||||
}
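A few concrete outcomes make the switch above easier to read. This is a hedged sketch that would sit in a test file of this package (it relies on the unexported isCompatible and assumes fmt is imported); the first argument is the kind found in the document, the second the kind the known structure expects:

func ExampleIsCompatible() {
	fmt.Println(isCompatible(reflect.Int, reflect.String))     // true: a YAML int can fill a string field
	fmt.Println(isCompatible(reflect.Map, reflect.Struct))     // true: a YAML mapping populates a struct
	fmt.Println(isCompatible(reflect.String, reflect.Float64)) // false: arbitrary strings are not numbers
	// Output:
	// true
	// true
	// false
}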
|
||||
|
||||
// checkValidity checks the value of every node in the provided config by
|
||||
// running config.AssertValid() on it.
|
||||
func checkValidity(cfg node, report *Report) {
|
||||
g := NewNode(config.CloudConfig{}, NewContext([]byte{}))
|
||||
checkNodeValidity(cfg, g, report)
|
||||
}
|
||||
|
||||
func checkNodeValidity(n, g node, r *Report) {
|
||||
if err := config.AssertValid(n.Value, g.field.Tag.Get("valid")); err != nil {
|
||||
r.Error(n.line, fmt.Sprintf("invalid value %v", n.Value.Interface()))
|
||||
}
|
||||
switch g.Kind() {
|
||||
case reflect.Struct:
|
||||
for _, cn := range n.children {
|
||||
if cg := g.Child(cn.name); cg.IsValid() {
|
||||
checkNodeValidity(cn, cg, r)
|
||||
}
|
||||
}
|
||||
case reflect.Slice:
|
||||
for _, cn := range n.children {
|
||||
var cg node
|
||||
c := g.Type().Elem()
|
||||
toNode(reflect.New(c).Elem().Interface(), context{}, &cg)
|
||||
checkNodeValidity(cn, cg, r)
|
||||
}
|
||||
case reflect.String, reflect.Int, reflect.Float64, reflect.Bool:
|
||||
default:
|
||||
panic(fmt.Sprintf("checkNodeValidity(): unhandled kind %s", g.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
// checkWriteFiles checks to make sure that the target file can actually be
|
||||
// written. Note that this check is approximate (it only checks to see if the file
|
||||
// is under /usr).
|
||||
func checkWriteFiles(cfg node, report *Report) {
|
||||
for _, f := range cfg.Child("write_files").children {
|
||||
c := f.Child("path")
|
||||
if !c.IsValid() {
|
||||
continue
|
||||
}
|
||||
|
||||
d := path.Dir(c.String())
|
||||
switch {
|
||||
case strings.HasPrefix(d, "/usr"):
|
||||
report.Error(c.line, "file cannot be written to a read-only filesystem")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// checkWriteFilesUnderCoreos checks to see if the 'write_files' node is a
|
||||
// child of 'coreos' (it shouldn't be).
|
||||
func checkWriteFilesUnderCoreos(cfg node, report *Report) {
|
||||
c := cfg.Child("coreos").Child("write_files")
|
||||
if c.IsValid() {
|
||||
report.Info(c.line, "write_files doesn't belong under coreos")
|
||||
}
|
||||
}
|
@ -1,408 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestCheckDiscoveryUrl(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: https://discovery.etcd.io/00000000000000000000000000000000",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: http://custom.domain/mytoken",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: disco",
|
||||
entries: []Entry{{entryWarning, "discovery URL is not valid", 3}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkDiscoveryUrl(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckEncoding(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "write_files:\n - encoding: base64\n content: aGVsbG8K",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - content: !!binary aGVsbG8K",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: base64\n content: !!binary aGVsbG8K",
|
||||
entries: []Entry{{entryError, `content cannot be decoded as "base64"`, 3}},
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: base64\n content: !!binary YUdWc2JHOEsK",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: gzip\n content: !!binary H4sIAOC3tVQAA8tIzcnJ5wIAIDA6NgYAAAA=",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: gzip+base64\n content: H4sIAOC3tVQAA8tIzcnJ5wIAIDA6NgYAAAA=",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: custom\n content: hello",
|
||||
entries: []Entry{{entryError, `content cannot be decoded as "custom"`, 3}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkEncoding(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckStructure(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
|
||||
// Test for unrecognized keys
|
||||
{
|
||||
config: "test:",
|
||||
entries: []Entry{{entryWarning, "unrecognized key \"test\"", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n bad:",
|
||||
entries: []Entry{{entryWarning, "unrecognized key \"bad\"", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: good",
|
||||
},
|
||||
|
||||
// Test for deprecated keys
|
||||
{
|
||||
config: "coreos:\n etcd:\n addr: hi",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n proxy: hi",
|
||||
entries: []Entry{{entryWarning, "deprecated key \"proxy\" (etcd2 options no longer work for etcd)", 3}},
|
||||
},
|
||||
|
||||
// Test for error on list of nodes
|
||||
{
|
||||
config: "coreos:\n units:\n - hello\n - goodbye",
|
||||
entries: []Entry{
|
||||
{entryWarning, "incorrect type for \"units[0]\" (want struct)", 3},
|
||||
{entryWarning, "incorrect type for \"units[1]\" (want struct)", 4},
|
||||
},
|
||||
},
|
||||
|
||||
// Test for incorrect types
|
||||
// Want boolean
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: true",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable:\n bad:",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable:\n - bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
// Want string
|
||||
{
|
||||
config: "hostname: true",
|
||||
},
|
||||
{
|
||||
config: "hostname: 4",
|
||||
},
|
||||
{
|
||||
config: "hostname: host",
|
||||
},
|
||||
{
|
||||
config: "hostname:\n name:",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"hostname\" (want string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "hostname:\n - name",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"hostname\" (want string)", 1}},
|
||||
},
|
||||
// Want struct
|
||||
{
|
||||
config: "coreos: true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos: 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos: hello",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: fire in the disco",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n - hello",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
// Want []string
|
||||
{
|
||||
config: "ssh_authorized_keys: true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys: 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys: key",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys:\n key: value",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys:\n - key",
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys:\n - key: value",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys[0]\" (want string)", 2}},
|
||||
},
|
||||
// Want []struct
|
||||
{
|
||||
config: "users:\n true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n bad:",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - name: good",
|
||||
},
|
||||
// Want struct within array
|
||||
{
|
||||
config: "users:\n - true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - name: hi\n - true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[1]\" (want struct)", 3}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - - bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkStructure(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckValidity(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
// string
|
||||
{
|
||||
config: "hostname: test",
|
||||
},
|
||||
|
||||
// int
|
||||
{
|
||||
config: "coreos:\n fleet:\n verbosity: 2",
|
||||
},
|
||||
|
||||
// bool
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: true",
|
||||
},
|
||||
|
||||
// slice
|
||||
{
|
||||
config: "coreos:\n units:\n - command: start\n - name: stop",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - command: lol",
|
||||
entries: []Entry{{entryError, "invalid value lol", 3}},
|
||||
},
|
||||
|
||||
// struct
|
||||
{
|
||||
config: "coreos:\n update:\n reboot_strategy: off",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n update:\n reboot_strategy: always",
|
||||
entries: []Entry{{entryError, "invalid value always", 3}},
|
||||
},
|
||||
|
||||
// unknown
|
||||
{
|
||||
config: "unknown: hi",
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkValidity(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckWriteFiles(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "write_files:\n - path: /valid",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - path: /tmp/usr/valid",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - path: /usr/invalid",
|
||||
entries: []Entry{{entryError, "file cannot be written to a read-only filesystem", 2}},
|
||||
},
|
||||
{
|
||||
config: "write-files:\n - path: /tmp/../usr/invalid",
|
||||
entries: []Entry{{entryError, "file cannot be written to a read-only filesystem", 2}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkWriteFiles(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckWriteFilesUnderCoreos(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "write_files:\n - path: /hi",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n write_files:\n - path: /hi",
|
||||
entries: []Entry{{entryInfo, "write_files doesn't belong under coreos", 2}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n write-files:\n - path: /hyphen",
|
||||
entries: []Entry{{entryInfo, "write_files doesn't belong under coreos", 2}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkWriteFilesUnderCoreos(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,164 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
|
||||
yaml "gopkg.in/yaml.v2"
|
||||
)
|
||||
|
||||
var (
|
||||
yamlLineError = regexp.MustCompile(`^YAML error: line (?P<line>[[:digit:]]+): (?P<msg>.*)$`)
|
||||
yamlError = regexp.MustCompile(`^YAML error: (?P<msg>.*)$`)
|
||||
)
|
||||
|
||||
// Validate runs a series of validation tests against the given userdata and
|
||||
// returns a report detailing all of the issues. Presently, only cloud-configs
|
||||
// can be validated.
|
||||
func Validate(userdataBytes []byte) (Report, error) {
|
||||
switch {
|
||||
case len(userdataBytes) == 0:
|
||||
return Report{}, nil
|
||||
case config.IsScript(string(userdataBytes)):
|
||||
return Report{}, nil
|
||||
case config.IsIgnitionConfig(string(userdataBytes)):
|
||||
return Report{}, nil
|
||||
case config.IsCloudConfig(string(userdataBytes)):
|
||||
return validateCloudConfig(userdataBytes, Rules)
|
||||
default:
|
||||
return Report{entries: []Entry{
|
||||
Entry{kind: entryError, message: `must be "#cloud-config" or begin with "#!"`, line: 1},
|
||||
}}, nil
|
||||
}
|
||||
}
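Since Validate and Report.Entries are the exported surface here, a minimal usage sketch from a separate program (the cloud-config content is made up for illustration):

package main

import (
	"fmt"
	"log"

	"github.com/coreos/coreos-cloudinit/config/validate"
)

func main() {
	userdata := []byte("#cloud-config\nhostname: test\nbad_key: oops")

	report, err := validate.Validate(userdata)
	if err != nil {
		log.Fatalf("validation could not run: %v", err)
	}
	// Each entry carries a severity, a message, and the offending line number.
	for _, e := range report.Entries() {
		fmt.Println(e.String())
	}
}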
|
||||
|
||||
// validateCloudConfig runs all of the validation rules in Rules and returns
|
||||
// the resulting report and any errors encountered.
|
||||
func validateCloudConfig(config []byte, rules []rule) (report Report, err error) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
err = fmt.Errorf("%v", r)
|
||||
}
|
||||
}()
|
||||
|
||||
c, err := parseCloudConfig(config, &report)
|
||||
if err != nil {
|
||||
return report, err
|
||||
}
|
||||
|
||||
for _, r := range rules {
|
||||
r(c, &report)
|
||||
}
|
||||
return report, nil
|
||||
}
|
||||
|
||||
// parseCloudConfig parses the provided config into a node structure and logs
|
||||
// any parsing issues into the provided report. Unrecoverable errors are
|
||||
// returned as an error.
|
||||
func parseCloudConfig(cfg []byte, report *Report) (node, error) {
|
||||
// yaml.UnmarshalMappingKeyTransform = func(nameIn string) (nameOut string) {
|
||||
// return nameIn
|
||||
// }
|
||||
// unmarshal the config into an implicitly-typed form. The yaml library
|
||||
// will implicitly convert types into their normalized form
|
||||
// (e.g. 0744 -> 484, off -> false).
|
||||
var weak map[interface{}]interface{}
|
||||
if err := yaml.Unmarshal(cfg, &weak); err != nil {
|
||||
matches := yamlLineError.FindStringSubmatch(err.Error())
|
||||
if len(matches) == 3 {
|
||||
line, err := strconv.Atoi(matches[1])
|
||||
if err != nil {
|
||||
return node{}, err
|
||||
}
|
||||
msg := matches[2]
|
||||
report.Error(line, msg)
|
||||
return node{}, nil
|
||||
}
|
||||
|
||||
matches = yamlError.FindStringSubmatch(err.Error())
|
||||
if len(matches) == 2 {
|
||||
report.Error(1, matches[1])
|
||||
return node{}, nil
|
||||
}
|
||||
|
||||
return node{}, errors.New("couldn't parse yaml error")
|
||||
}
|
||||
w := NewNode(weak, NewContext(cfg))
|
||||
w = normalizeNodeNames(w, report)
|
||||
|
||||
// unmarshal the config into the explicitly-typed form.
|
||||
// yaml.UnmarshalMappingKeyTransform = func(nameIn string) (nameOut string) {
|
||||
// return strings.Replace(nameIn, "-", "_", -1)
|
||||
// }
|
||||
var strong config.CloudConfig
|
||||
if err := yaml.Unmarshal([]byte(cfg), &strong); err != nil {
|
||||
return node{}, err
|
||||
}
|
||||
s := NewNode(strong, NewContext(cfg))
|
||||
|
||||
// coerceNodes weak nodes and strong nodes. strong nodes replace weak nodes
|
||||
// if they are compatible types (this happens when the yaml library
|
||||
// converts the input).
|
||||
// (e.g. weak 484 is replaced by strong 0744, weak 4 is not replaced by
|
||||
// strong false)
|
||||
return coerceNodes(w, s), nil
|
||||
}
|
||||
|
||||
// coerceNodes recursively evaluates two nodes, returning a new node containing
|
||||
// either the weak or strong node's value and its recursively processed
|
||||
// children. The strong node's value is used if the two nodes are leaves, are
|
||||
// both valid, and are compatible types (defined by isCompatible()). The weak
|
||||
// node is returned in all other cases. coerceNodes is used to counteract the
|
||||
// effects of yaml's automatic type conversion. The weak node is the one
|
||||
// resulting from unmarshalling into an empty interface{} (the type is
|
||||
// inferred). The strong node is the one resulting from unmarshalling into a
|
||||
// struct. If the two nodes are of compatible types, the yaml library correctly
|
||||
// parsed the value into the strongly typed unmarshalling. In this case, we
|
||||
// prefer the strong node because it's actually the type we are expecting.
|
||||
func coerceNodes(w, s node) node {
|
||||
n := w
|
||||
n.children = nil
|
||||
if len(w.children) == 0 && len(s.children) == 0 &&
|
||||
w.IsValid() && s.IsValid() &&
|
||||
isCompatible(w.Kind(), s.Kind()) {
|
||||
n.Value = s.Value
|
||||
}
|
||||
|
||||
for _, cw := range w.children {
|
||||
n.children = append(n.children, coerceNodes(cw, s.Child(cw.name)))
|
||||
}
|
||||
return n
|
||||
}
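The weak/strong split is easiest to see on a scalar such as a file mode. Below is a standalone sketch of the "weak" side using gopkg.in/yaml.v2 (the same library imported above); the behavior of the "strong" side is the one described in the comment above and is only restated here, not re-verified:

package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// The implicitly-typed unmarshal normalizes the scalar: yaml.v2 reads the
	// unquoted 0744 as octal, so this prints "484 int".
	var weak map[interface{}]interface{}
	if err := yaml.Unmarshal([]byte("permissions: 0744"), &weak); err != nil {
		panic(err)
	}
	fmt.Printf("%v %T\n", weak["permissions"], weak["permissions"])
	// Per the comment above, unmarshalling the same document into the typed
	// config (whose permissions field is a string) keeps the literal "0744",
	// which is why coerceNodes prefers the strong node when the kinds are
	// compatible.
}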
|
||||
|
||||
// normalizeNodeNames replaces all occurrences of '-' with '_' within key names
|
||||
// and makes a note of each replacement in the report.
|
||||
func normalizeNodeNames(node node, report *Report) node {
|
||||
if strings.Contains(node.name, "-") {
|
||||
// TODO(crawford): Enable this message once the new validator hits stable.
|
||||
//report.Info(node.line, fmt.Sprintf("%q uses '-' instead of '_'", node.name))
|
||||
node.name = strings.Replace(node.name, "-", "_", -1)
|
||||
}
|
||||
for i := range node.children {
|
||||
node.children[i] = normalizeNodeNames(node.children[i], report)
|
||||
}
|
||||
return node
|
||||
}
|
@ -1,177 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestParseCloudConfig(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: " ",
|
||||
entries: []Entry{{entryError, "found character that cannot start any token", 1}},
|
||||
},
|
||||
{
|
||||
config: "a:\na",
|
||||
entries: []Entry{{entryError, "could not find expected ':'", 2}},
|
||||
},
|
||||
{
|
||||
config: "#hello\na:\na",
|
||||
entries: []Entry{{entryError, "could not find expected ':'", 3}},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
r := Report{}
|
||||
parseCloudConfig([]byte(tt.config), &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%s): want %#v, got %#v", tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidateCloudConfig(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
rules []rule
|
||||
|
||||
report Report
|
||||
err error
|
||||
}{
|
||||
{
|
||||
rules: []rule{func(_ node, _ *Report) { panic("something happened") }},
|
||||
err: errors.New("something happened"),
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: 0744",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: '0744'",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: 744",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: '744'",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "coreos:\n update:\n reboot-strategy: off",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "coreos:\n update:\n reboot-strategy: false",
|
||||
rules: Rules,
|
||||
report: Report{entries: []Entry{{entryError, "invalid value false", 3}}},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
r, err := validateCloudConfig([]byte(tt.config), tt.rules)
|
||||
if !reflect.DeepEqual(tt.err, err) {
|
||||
t.Errorf("bad error (%s): want %v, got %v", tt.config, tt.err, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.report, r) {
|
||||
t.Errorf("bad report (%s): want %+v, got %+v", tt.config, tt.report, r)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidate(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
report Report
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "#!/bin/bash\necho hey",
|
||||
},
|
||||
{
|
||||
config: "{}",
|
||||
report: Report{entries: []Entry{{entryError, `must be "#cloud-config" or begin with "#!"`, 1}}},
|
||||
},
|
||||
{
|
||||
config: `{"ignitionVersion":0}`,
|
||||
},
|
||||
{
|
||||
config: `{"ignitionVersion":1}`,
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r, err := Validate([]byte(tt.config))
|
||||
if err != nil {
|
||||
t.Errorf("bad error (case #%d): want %v, got %v", i, nil, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.report, r) {
|
||||
t.Errorf("bad report (case #%d): want %+v, got %+v", i, tt.report, r)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkValidate(b *testing.B) {
|
||||
config := `#cloud-config
|
||||
hostname: test
|
||||
|
||||
coreos:
|
||||
etcd:
|
||||
name: node001
|
||||
discovery: https://discovery.etcd.io/disco
|
||||
addr: $public_ipv4:4001
|
||||
peer-addr: $private_ipv4:7001
|
||||
fleet:
|
||||
verbosity: 2
|
||||
metadata: "hi"
|
||||
update:
|
||||
reboot-strategy: off
|
||||
units:
|
||||
- name: hi.service
|
||||
command: start
|
||||
enable: true
|
||||
- name: bye.service
|
||||
command: stop
|
||||
|
||||
ssh_authorized_keys:
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
|
||||
|
||||
users:
|
||||
- name: me
|
||||
|
||||
write_files:
|
||||
- path: /etc/yes
|
||||
content: "Hi"
|
||||
|
||||
manage_etc_hosts: localhost`
|
||||
|
||||
for i := 0; i < b.N; i++ {
|
||||
if _, err := Validate([]byte(config)); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,444 +1,95 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"compress/gzip"
|
||||
"flag"
|
||||
"fmt"
|
||||
"flag"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"runtime"
|
||||
"sync"
|
||||
"time"
|
||||
"log"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
"github.com/coreos/coreos-cloudinit/config/validate"
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/configdrive"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/file"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/openstack"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/digitalocean"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/ec2"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/packet"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/proc_cmdline"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/url"
|
||||
|
||||
// "github.com/coreos/coreos-cloudinit/datasource/vmware"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/waagent"
|
||||
"github.com/coreos/coreos-cloudinit/initialize"
|
||||
"github.com/coreos/coreos-cloudinit/network"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
"github.com/coreos/coreos-cloudinit/system"
|
||||
"github.com/coreos/coreos-cloudinit/cloudinit"
|
||||
)
|
||||
|
||||
var (
|
||||
datasourceInterval = 100 * time.Millisecond
|
||||
datasourceMaxInterval = 30 * time.Second
|
||||
datasourceTimeout = 5 * time.Minute
|
||||
)
|
||||
|
||||
var (
|
||||
flags = struct {
|
||||
printVersion bool
|
||||
ignoreFailure bool
|
||||
sources struct {
|
||||
file string
|
||||
configDrive string
|
||||
waagent string
|
||||
metadataService bool
|
||||
ec2MetadataService string
|
||||
openstackMetadataService string
|
||||
// cloudSigmaMetadataService bool
|
||||
digitalOceanMetadataService string
|
||||
packetMetadataService string
|
||||
url string
|
||||
procCmdLine bool
|
||||
// vmware bool
|
||||
}
|
||||
convertNetconf string
|
||||
workspace string
|
||||
sshKeyName string
|
||||
oem string
|
||||
validate bool
|
||||
timeout string
|
||||
dstimeout string
|
||||
}{}
|
||||
version = "was not built properly"
|
||||
)
|
||||
|
||||
func init() {
|
||||
flag.BoolVar(&flags.printVersion, "version", false, "Print the version and exit")
|
||||
flag.BoolVar(&flags.ignoreFailure, "ignore-failure", false, "Exits with 0 status in the event of malformed input from user-data")
|
||||
flag.StringVar(&flags.sources.file, "from-file", "", "Read user-data from provided file")
|
||||
flag.StringVar(&flags.sources.configDrive, "from-configdrive", "", "Read data from provided cloud-drive directory")
|
||||
flag.StringVar(&flags.sources.waagent, "from-waagent", "", "Read data from provided waagent directory")
|
||||
flag.StringVar(&flags.sources.ec2MetadataService, "from-ec2-metadata", "", "Download EC2 data from the provided url")
|
||||
// flag.BoolVar(&flags.sources.cloudSigmaMetadataService, "from-cloudsigma-metadata", false, "Download data from CloudSigma server context")
|
||||
flag.StringVar(&flags.sources.digitalOceanMetadataService, "from-digitalocean-metadata", "", "Download DigitalOcean data from the provided url")
|
||||
flag.StringVar(&flags.sources.openstackMetadataService, "from-openstack-metadata", "", "Download OpenStack data from the provided url")
|
||||
flag.StringVar(&flags.sources.packetMetadataService, "from-packet-metadata", "", "Download Packet data from metadata service")
|
||||
flag.StringVar(&flags.sources.url, "from-url", "", "Download user-data from provided url")
|
||||
flag.BoolVar(&flags.sources.procCmdLine, "from-proc-cmdline", false, fmt.Sprintf("Parse %s for '%s=<url>', using the cloud-config served by an HTTP GET to <url>", proc_cmdline.ProcCmdlineLocation, proc_cmdline.ProcCmdlineCloudConfigFlag))
|
||||
// flag.BoolVar(&flags.sources.vmware, "from-vmware-guestinfo", false, "Read data from VMware guestinfo")
|
||||
flag.StringVar(&flags.oem, "oem", "", "Use the settings specific to the provided OEM")
|
||||
flag.StringVar(&flags.convertNetconf, "convert-netconf", "", "Read the network config provided in cloud-drive and translate it from the specified format into networkd unit files")
|
||||
flag.StringVar(&flags.workspace, "workspace", "/var/lib/cloudinit", "Base directory where cloudinit should use to store data")
|
||||
flag.StringVar(&flags.sshKeyName, "ssh-key-name", initialize.DefaultSSHKeyName, "Add SSH keys to the system with the given name")
|
||||
flag.BoolVar(&flags.validate, "validate", false, "[EXPERIMENTAL] Validate the user-data but do not apply it to the system")
|
||||
flag.StringVar(&flags.timeout, "timeout", "60s", "Timeout to wait for all datasource metadata")
|
||||
flag.StringVar(&flags.dstimeout, "dstimeout", "10s", "Timeout to wait for single datasource metadata")
|
||||
}
|
||||
|
||||
type oemConfig map[string]string
|
||||
|
||||
var (
|
||||
oemConfigs = map[string]oemConfig{
|
||||
"digitalocean": oemConfig{
|
||||
"from-digitalocean-metadata": "http://169.254.169.254/",
|
||||
"convert-netconf": "digitalocean",
|
||||
},
|
||||
"ec2-compat": oemConfig{
|
||||
"from-ec2-metadata": "http://169.254.169.254/",
|
||||
"from-configdrive": "/media/configdrive",
|
||||
},
|
||||
"rackspace-onmetal": oemConfig{
|
||||
"from-configdrive": "/media/configdrive",
|
||||
"convert-netconf": "debian",
|
||||
},
|
||||
"azure": oemConfig{
|
||||
"from-waagent": "/var/lib/waagent",
|
||||
},
|
||||
// "cloudsigma": oemConfig{
|
||||
// "from-cloudsigma-metadata": "true",
|
||||
// },
|
||||
"packet": oemConfig{
|
||||
"from-packet-metadata": "https://metadata.packet.net/",
|
||||
},
|
||||
// "vmware": oemConfig{
|
||||
// "from-vmware-guestinfo": "true",
|
||||
// "convert-netconf": "vmware",
|
||||
// },
|
||||
}
|
||||
)
|
||||
const version = "0.1.2+git"
|
||||
|
||||
func main() {
|
||||
var userdata []byte
|
||||
var err error
|
||||
failure := false
|
||||
|
||||
// Conservative Go 1.5 upgrade strategy:
|
||||
// keep GOMAXPROCS' default at 1 for now.
|
||||
if os.Getenv("GOMAXPROCS") == "" {
|
||||
runtime.GOMAXPROCS(1)
|
||||
}
|
||||
var printVersion bool
|
||||
flag.BoolVar(&printVersion, "version", false, "Print the version and exit")
|
||||
|
||||
var file string
|
||||
flag.StringVar(&file, "from-file", "", "Read user-data from provided file")
|
||||
|
||||
var url string
|
||||
flag.StringVar(&url, "from-url", "", "Download user-data from provided url")
|
||||
|
||||
var workspace string
|
||||
flag.StringVar(&workspace, "workspace", "/var/lib/coreos-cloudinit", "Base directory coreos-cloudinit should use to store data")
|
||||
|
||||
var sshKeyName string
|
||||
flag.StringVar(&sshKeyName, "ssh-key-name", cloudinit.DefaultSSHKeyName, "Add SSH keys to the system with the given name")
|
||||
|
||||
flag.Parse()
|
||||
|
||||
if c, ok := oemConfigs[flags.oem]; ok {
|
||||
for k, v := range c {
|
||||
flag.Set(k, v)
|
||||
}
|
||||
} else if flags.oem != "" {
|
||||
oems := make([]string, 0, len(oemConfigs))
|
||||
for k := range oemConfigs {
|
||||
oems = append(oems, k)
|
||||
}
|
||||
fmt.Printf("Invalid option to -oem: %q. Supported options: %q\n", flags.oem, oems)
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
if flags.printVersion == true {
|
||||
fmt.Printf("coreos-cloudinit %s\n", version)
|
||||
if printVersion == true {
|
||||
fmt.Printf("coreos-cloudinit version %s\n", version)
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
datasourceTimeout, err = time.ParseDuration(flags.timeout)
|
||||
if err != nil {
|
||||
fmt.Printf("Invalid value to --timeout: %q\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
datasourceMaxInterval, err = time.ParseDuration(flags.dstimeout)
|
||||
if err != nil {
|
||||
fmt.Printf("Invalid value to --dstimeout: %q\n", err)
|
||||
if file != "" && url != "" {
|
||||
fmt.Println("Provide one of --from-file or --from-url")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
switch flags.convertNetconf {
|
||||
case "":
|
||||
case "debian":
|
||||
case "digitalocean":
|
||||
case "packet":
|
||||
// case "vmware":
|
||||
default:
|
||||
fmt.Printf("Invalid option to -convert-netconf: '%s'. Supported options: 'debian, digitalocean, packet, vmware'\n", flags.convertNetconf)
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
dss := getDatasources()
|
||||
if len(dss) == 0 {
|
||||
fmt.Println("Provide at least one of --from-file, --from-configdrive, --from-openstack-metadata, --from-ec2-metadata, --from-cloudsigma-metadata, --from-packet-metadata, --from-digitalocean-metadata, --from-vmware-guestinfo, --from-waagent, --from-url or --from-proc-cmdline")
|
||||
os.Exit(2)
|
||||
}
|
||||
fmt.Printf("%#+v\n", dss)
|
||||
ds := selectDatasource(dss)
|
||||
if ds == nil {
|
||||
log.Println("No datasources available in time")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
log.Printf("Fetching user-data from datasource of type %q\n", ds.Type())
|
||||
userdataBytes, err := ds.FetchUserdata()
|
||||
if err != nil {
|
||||
log.Printf("Failed fetching user-data from datasource: %v. Continuing...\n", err)
|
||||
failure = true
|
||||
}
|
||||
userdataBytes, err = decompressIfGzip(userdataBytes)
|
||||
if err != nil {
|
||||
log.Printf("Failed decompressing user-data from datasource: %v. Continuing...\n", err)
|
||||
failure = true
|
||||
}
|
||||
|
||||
if report, err := validate.Validate(userdataBytes); err == nil {
|
||||
ret := 0
|
||||
for _, e := range report.Entries() {
|
||||
log.Println(e)
|
||||
ret = 1
|
||||
if file != "" {
|
||||
log.Printf("Reading user-data from file: %s", file)
|
||||
userdata, err = ioutil.ReadFile(file)
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
if flags.validate {
|
||||
os.Exit(ret)
|
||||
} else if url != "" {
|
||||
log.Printf("Reading user-data from metadata service")
|
||||
svc := cloudinit.NewMetadataService(url)
|
||||
userdata, err = svc.UserData()
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
} else {
|
||||
log.Printf("Failed while validating user_data (%q)\n", err)
|
||||
if flags.validate {
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
log.Printf("Fetching meta-data from datasource of type %q\n", ds.Type())
|
||||
metadata, err := ds.FetchMetadata()
|
||||
if err != nil {
|
||||
log.Printf("Failed fetching meta-data from datasource: %v\n", err)
|
||||
fmt.Println("Provide one of --from-file or --from-url")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
// Apply environment to user-data
|
||||
env := initialize.NewEnvironment("/", ds.ConfigRoot(), flags.workspace, flags.sshKeyName, metadata)
|
||||
userdata := env.Apply(string(userdataBytes))
|
||||
|
||||
var ccu *config.CloudConfig
|
||||
var script *config.Script
|
||||
switch ud, err := initialize.ParseUserData(userdata); err {
|
||||
case initialize.ErrIgnitionConfig:
|
||||
fmt.Printf("Detected an Ignition config. Exiting...")
|
||||
if len(userdata) == 0 {
|
||||
log.Printf("No user data to handle, exiting.")
|
||||
os.Exit(0)
|
||||
case nil:
|
||||
switch t := ud.(type) {
|
||||
case *config.CloudConfig:
|
||||
ccu = t
|
||||
case *config.Script:
|
||||
script = t
|
||||
}
|
||||
default:
|
||||
fmt.Printf("Failed to parse user-data: %v\nContinuing...\n", err)
|
||||
failure = true
|
||||
}
|
||||
|
||||
log.Println("Merging cloud-config from meta-data and user-data")
|
||||
cc := mergeConfigs(ccu, metadata)
|
||||
|
||||
var ifaces []network.InterfaceGenerator
|
||||
if flags.convertNetconf != "" {
|
||||
var err error
|
||||
switch flags.convertNetconf {
|
||||
case "debian":
|
||||
ifaces, err = network.ProcessDebianNetconf(metadata.NetworkConfig.([]byte))
|
||||
case "digitalocean":
|
||||
ifaces, err = network.ProcessDigitalOceanNetconf(metadata.NetworkConfig.(digitalocean.Metadata))
|
||||
case "packet":
|
||||
ifaces, err = network.ProcessPacketNetconf(metadata.NetworkConfig.(packet.NetworkData))
|
||||
// case "vmware":
|
||||
// ifaces, err = network.ProcessVMwareNetconf(metadata.NetworkConfig.(map[string]string))
|
||||
default:
|
||||
err = fmt.Errorf("Unsupported network config format %q", flags.convertNetconf)
|
||||
}
|
||||
if err != nil {
|
||||
log.Printf("Failed to generate interfaces: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
if err = initialize.Apply(cc, ifaces, env); err != nil {
|
||||
log.Printf("Failed to apply cloud-config: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
if script != nil {
|
||||
if err = runScript(*script, env); err != nil {
|
||||
log.Printf("Failed to run script: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
if failure && !flags.ignoreFailure {
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
// mergeConfigs merges certain options from md (meta-data from the datasource)
|
||||
// onto cc (a CloudConfig derived from user-data), if they are not already set
|
||||
// on cc (i.e. user-data always takes precedence)
|
||||
func mergeConfigs(cc *config.CloudConfig, md datasource.Metadata) (out config.CloudConfig) {
|
||||
if cc != nil {
|
||||
out = *cc
|
||||
}
|
||||
|
||||
if md.Hostname != "" {
|
||||
if out.Hostname != "" {
|
||||
log.Printf("Warning: user-data hostname (%s) overrides metadata hostname (%s)\n", out.Hostname, md.Hostname)
|
||||
} else {
|
||||
out.Hostname = md.Hostname
|
||||
}
|
||||
}
|
||||
for _, key := range md.SSHPublicKeys {
|
||||
out.SSHAuthorizedKeys = append(out.SSHAuthorizedKeys, key)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// getDatasources creates a slice of possible Datasources for cloudinit based
|
||||
// on the different source command-line flags.
|
||||
func getDatasources() []datasource.Datasource {
|
||||
dss := make([]datasource.Datasource, 0, 5)
|
||||
if flags.sources.file != "" {
|
||||
dss = append(dss, file.NewDatasource(flags.sources.file))
|
||||
}
|
||||
if flags.sources.url != "" {
|
||||
dss = append(dss, url.NewDatasource(flags.sources.url))
|
||||
}
|
||||
if flags.sources.configDrive != "" {
|
||||
dss = append(dss, configdrive.NewDatasource(flags.sources.configDrive))
|
||||
}
|
||||
if flags.sources.metadataService {
|
||||
dss = append(dss, ec2.NewDatasource(ec2.DefaultAddress))
|
||||
}
|
||||
if flags.sources.openstackMetadataService != "" {
|
||||
dss = append(dss, openstack.NewDatasource(flags.sources.openstackMetadataService))
|
||||
}
|
||||
if flags.sources.ec2MetadataService != "" {
|
||||
dss = append(dss, ec2.NewDatasource(flags.sources.ec2MetadataService))
|
||||
}
|
||||
// if flags.sources.cloudSigmaMetadataService {
|
||||
// dss = append(dss, cloudsigma.NewServerContextService())
|
||||
// }
|
||||
if flags.sources.digitalOceanMetadataService != "" {
|
||||
dss = append(dss, digitalocean.NewDatasource(flags.sources.digitalOceanMetadataService))
|
||||
}
|
||||
if flags.sources.waagent != "" {
|
||||
dss = append(dss, waagent.NewDatasource(flags.sources.waagent))
|
||||
}
|
||||
if flags.sources.packetMetadataService != "" {
|
||||
dss = append(dss, packet.NewDatasource(flags.sources.packetMetadataService))
|
||||
}
|
||||
if flags.sources.procCmdLine {
|
||||
dss = append(dss, proc_cmdline.NewDatasource())
|
||||
}
|
||||
// if flags.sources.vmware {
|
||||
// dss = append(dss, vmware.NewDatasource())
|
||||
// }
|
||||
return dss
|
||||
}
|
||||
|
||||
// selectDatasource attempts to choose a valid Datasource to use based on its
|
||||
// current availability. The first Datasource to report to be available is
|
||||
// returned. Datasources will be retried if possible if they are not
|
||||
// immediately available. If all Datasources are permanently unavailable or
|
||||
// datasourceTimeout is reached before one becomes available, nil is returned.
|
||||
func selectDatasource(sources []datasource.Datasource) datasource.Datasource {
|
||||
ds := make(chan datasource.Datasource)
|
||||
stop := make(chan struct{})
|
||||
var wg sync.WaitGroup
|
||||
|
||||
for _, s := range sources {
|
||||
wg.Add(1)
|
||||
go func(s datasource.Datasource) {
|
||||
defer wg.Done()
|
||||
|
||||
duration := datasourceInterval
|
||||
for {
|
||||
log.Printf("Checking availability of %q\n", s.Type())
|
||||
if s.IsAvailable() {
|
||||
ds <- s
|
||||
return
|
||||
} else if !s.AvailabilityChanges() {
|
||||
return
|
||||
}
|
||||
select {
|
||||
case <-stop:
|
||||
return
|
||||
case <-time.After(duration):
|
||||
duration = pkg.ExpBackoff(duration, datasourceMaxInterval)
|
||||
}
|
||||
}
|
||||
}(s)
|
||||
}
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
wg.Wait()
|
||||
close(done)
|
||||
}()
|
||||
|
||||
var s datasource.Datasource
|
||||
select {
|
||||
case s = <-ds:
|
||||
case <-done:
|
||||
case <-time.After(datasourceTimeout):
|
||||
}
|
||||
|
||||
close(stop)
|
||||
return s
|
||||
}
|
||||
|
||||
// TODO(jonboulle): this should probably be refactored and moved into a different module
|
||||
func runScript(script config.Script, env *initialize.Environment) error {
|
||||
err := initialize.PrepWorkspace(env.Workspace())
|
||||
parsed, err := cloudinit.ParseUserData(userdata)
|
||||
if err != nil {
|
||||
log.Printf("Failed preparing workspace: %v\n", err)
|
||||
return err
|
||||
log.Fatalf("Failed parsing user-data: %v", err)
|
||||
}
|
||||
path, err := initialize.PersistScriptInWorkspace(script, env.Workspace())
|
||||
if err == nil {
|
||||
var name string
|
||||
name, err = system.ExecuteScript(path)
|
||||
initialize.PersistUnitNameInWorkspace(name, env.Workspace())
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
const gzipMagicBytes = "\x1f\x8b"
|
||||
|
||||
func decompressIfGzip(userdataBytes []byte) ([]byte, error) {
|
||||
if !bytes.HasPrefix(userdataBytes, []byte(gzipMagicBytes)) {
|
||||
return userdataBytes, nil
|
||||
}
|
||||
gzr, err := gzip.NewReader(bytes.NewReader(userdataBytes))
|
||||
err = cloudinit.PrepWorkspace(workspace)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
log.Fatalf("Failed preparing workspace: %v", err)
|
||||
}
|
||||
|
||||
switch t := parsed.(type) {
|
||||
case cloudinit.CloudConfig:
|
||||
err = cloudinit.ApplyCloudConfig(t, sshKeyName)
|
||||
case cloudinit.Script:
|
||||
var path string
|
||||
path, err = cloudinit.PersistScriptInWorkspace(t, workspace)
|
||||
if err == nil {
|
||||
var name string
|
||||
name, err = cloudinit.ExecuteScript(path)
|
||||
cloudinit.PersistScriptUnitNameInWorkspace(name, workspace)
|
||||
}
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
log.Fatalf("Failed resolving user-data: %v", err)
|
||||
}
|
||||
defer gzr.Close()
|
||||
return ioutil.ReadAll(gzr)
|
||||
}
|
||||
|
@ -1,147 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
)
|
||||
|
||||
func TestMergeConfigs(t *testing.T) {
|
||||
tests := []struct {
|
||||
cc *config.CloudConfig
|
||||
md datasource.Metadata
|
||||
|
||||
out config.CloudConfig
|
||||
}{
|
||||
{
|
||||
// If md is empty and cc is nil, result should be empty
|
||||
out: config.CloudConfig{},
|
||||
},
|
||||
{
|
||||
// If md and cc are empty, result should be empty
|
||||
cc: &config.CloudConfig{},
|
||||
out: config.CloudConfig{},
|
||||
},
|
||||
{
|
||||
// If md is empty, cc should be returned unchanged
|
||||
cc: &config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
},
|
||||
{
|
||||
// If cc is empty, the metadata values should be used
|
||||
cc: &config.CloudConfig{},
|
||||
md: datasource.Metadata{Hostname: "md-host", SSHPublicKeys: map[string]string{"key": "ghi"}},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"ghi"}, Hostname: "md-host"},
|
||||
},
|
||||
{
|
||||
// If cc is nil, the metadata values should be used
|
||||
md: datasource.Metadata{Hostname: "md-host", SSHPublicKeys: map[string]string{"key": "ghi"}},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"ghi"}, Hostname: "md-host"},
|
||||
},
|
||||
{
|
||||
// user-data should override completely in the case of conflicts
|
||||
cc: &config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
md: datasource.Metadata{Hostname: "md-host"},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
},
|
||||
{
|
||||
// Mixed merge should succeed
|
||||
cc: &config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
md: datasource.Metadata{Hostname: "md-host", SSHPublicKeys: map[string]string{"key": "ghi"}},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def", "ghi"}, Hostname: "cc-host"},
|
||||
},
|
||||
{
|
||||
// Completely non-conflicting merge should be fine
|
||||
cc: &config.CloudConfig{Hostname: "cc-host"},
|
||||
md: datasource.Metadata{SSHPublicKeys: map[string]string{"zaphod": "beeblebrox"}},
|
||||
out: config.CloudConfig{Hostname: "cc-host", SSHAuthorizedKeys: []string{"beeblebrox"}},
|
||||
},
|
||||
{
|
||||
// Non-mergeable settings in user-data should not be affected
|
||||
cc: &config.CloudConfig{Hostname: "cc-host", ManageEtcHosts: config.EtcHosts("lolz")},
|
||||
md: datasource.Metadata{Hostname: "md-host"},
|
||||
out: config.CloudConfig{Hostname: "cc-host", ManageEtcHosts: config.EtcHosts("lolz")},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
out := mergeConfigs(tt.cc, tt.md)
|
||||
if !reflect.DeepEqual(tt.out, out) {
|
||||
t.Errorf("bad config (%d): want %#v, got %#v", i, tt.out, out)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func mustDecode(in string) []byte {
|
||||
out, err := base64.StdEncoding.DecodeString(in)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func TestDecompressIfGzip(t *testing.T) {
|
||||
tests := []struct {
|
||||
in []byte
|
||||
|
||||
out []byte
|
||||
err error
|
||||
}{
|
||||
{
|
||||
in: nil,
|
||||
|
||||
out: nil,
|
||||
err: nil,
|
||||
},
|
||||
{
|
||||
in: []byte{},
|
||||
|
||||
out: []byte{},
|
||||
err: nil,
|
||||
},
|
||||
{
|
||||
in: mustDecode("H4sIAJWV/VUAA1NOzskvTdFNzs9Ly0wHABt6mQENAAAA"),
|
||||
|
||||
out: []byte("#cloud-config"),
|
||||
err: nil,
|
||||
},
|
||||
{
|
||||
in: []byte("#cloud-config"),
|
||||
|
||||
out: []byte("#cloud-config"),
|
||||
err: nil,
|
||||
},
|
||||
{
|
||||
in: mustDecode("H4sCORRUPT=="),
|
||||
|
||||
out: nil,
|
||||
err: errors.New("any error"),
|
||||
},
|
||||
}
|
||||
for i, tt := range tests {
|
||||
out, err := decompressIfGzip(tt.in)
|
||||
if !bytes.Equal(out, tt.out) || (tt.err != nil && err == nil) {
|
||||
t.Errorf("bad gzip (%d): want (%s, %#v), got (%s, %#v)", i, string(tt.out), tt.err, string(out), err)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
27
cover
27
cover
@ -1,27 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
#
|
||||
# Generate coverage HTML for a package
|
||||
# e.g. PKG=./initialize ./cover
|
||||
#
|
||||
|
||||
if [ -z "$PKG" ]; then
|
||||
echo "cover only works with a single package, sorry"
|
||||
exit 255
|
||||
fi
|
||||
|
||||
COVEROUT="coverage"
|
||||
|
||||
if ! [ -d "$COVEROUT" ]; then
|
||||
mkdir "$COVEROUT"
|
||||
fi
|
||||
|
||||
# strip out slashes and dots
|
||||
COVERPKG=${PKG//\//}
|
||||
COVERPKG=${COVERPKG//./}
|
||||
|
||||
# generate arg for "go test"
|
||||
export COVER="-coverprofile ${COVEROUT}/${COVERPKG}.out"
|
||||
|
||||
source ./test
|
||||
|
||||
go tool cover -html=${COVEROUT}/${COVERPKG}.out
|
@ -1,102 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package configdrive
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"path"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
)
|
||||
|
||||
const (
|
||||
openstackApiVersion = "latest"
|
||||
)
|
||||
|
||||
type configDrive struct {
|
||||
root string
|
||||
readFile func(filename string) ([]byte, error)
|
||||
}
|
||||
|
||||
func NewDatasource(root string) *configDrive {
|
||||
return &configDrive{root, ioutil.ReadFile}
|
||||
}
|
||||
|
||||
func (cd *configDrive) IsAvailable() bool {
|
||||
_, err := os.Stat(cd.root)
|
||||
return !os.IsNotExist(err)
|
||||
}
|
||||
|
||||
func (cd *configDrive) AvailabilityChanges() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (cd *configDrive) ConfigRoot() string {
|
||||
return cd.openstackRoot()
|
||||
}
|
||||
|
||||
func (cd *configDrive) FetchMetadata() (metadata datasource.Metadata, err error) {
|
||||
var data []byte
|
||||
var m struct {
|
||||
SSHAuthorizedKeyMap map[string]string `json:"public_keys"`
|
||||
Hostname string `json:"hostname"`
|
||||
NetworkConfig struct {
|
||||
ContentPath string `json:"content_path"`
|
||||
} `json:"network_config"`
|
||||
}
|
||||
|
||||
if data, err = cd.tryReadFile(path.Join(cd.openstackVersionRoot(), "meta_data.json")); err != nil || len(data) == 0 {
|
||||
return
|
||||
}
|
||||
if err = json.Unmarshal([]byte(data), &m); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
metadata.SSHPublicKeys = m.SSHAuthorizedKeyMap
|
||||
metadata.Hostname = m.Hostname
|
||||
if m.NetworkConfig.ContentPath != "" {
|
||||
metadata.NetworkConfig, err = cd.tryReadFile(path.Join(cd.openstackRoot(), m.NetworkConfig.ContentPath))
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func (cd *configDrive) FetchUserdata() ([]byte, error) {
|
||||
return cd.tryReadFile(path.Join(cd.openstackVersionRoot(), "user_data"))
|
||||
}
|
||||
|
||||
func (cd *configDrive) Type() string {
|
||||
return "cloud-drive"
|
||||
}
|
||||
|
||||
func (cd *configDrive) openstackRoot() string {
|
||||
return path.Join(cd.root, "openstack")
|
||||
}
|
||||
|
||||
func (cd *configDrive) openstackVersionRoot() string {
|
||||
return path.Join(cd.openstackRoot(), openstackApiVersion)
|
||||
}
|
||||
|
||||
func (cd *configDrive) tryReadFile(filename string) ([]byte, error) {
|
||||
log.Printf("Attempting to read from %q\n", filename)
|
||||
data, err := cd.readFile(filename)
|
||||
if os.IsNotExist(err) {
|
||||
err = nil
|
||||
}
|
||||
return data, err
|
||||
}
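For orientation, a hedged end-to-end sketch of how this datasource is driven from a caller (the mount point /media/configdrive is an assumption matching the ec2-compat OEM default used elsewhere in this change):

package main

import (
	"fmt"
	"log"

	"github.com/coreos/coreos-cloudinit/datasource/configdrive"
)

func main() {
	cd := configdrive.NewDatasource("/media/configdrive")
	if !cd.IsAvailable() {
		log.Fatal("config drive not mounted")
	}

	metadata, err := cd.FetchMetadata()
	if err != nil {
		log.Fatalf("reading meta_data.json: %v", err)
	}
	fmt.Println("hostname:", metadata.Hostname)

	userdata, err := cd.FetchUserdata()
	if err != nil {
		log.Fatalf("reading user_data: %v", err)
	}
	fmt.Printf("read %d bytes of user-data under %s\n", len(userdata), cd.ConfigRoot())
}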
|
@ -1,145 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package configdrive
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/test"
|
||||
)
|
||||
|
||||
func TestFetchMetadata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
files test.MockFilesystem
|
||||
|
||||
metadata datasource.Metadata
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
files: test.NewMockFilesystem(test.File{Path: "/openstack/latest/meta_data.json", Contents: ""}),
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
files: test.NewMockFilesystem(test.File{Path: "/openstack/latest/meta_data.json", Contents: `{"ignore": "me"}`}),
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
files: test.NewMockFilesystem(test.File{Path: "/openstack/latest/meta_data.json", Contents: `{"hostname": "host"}`}),
|
||||
metadata: datasource.Metadata{Hostname: "host"},
|
||||
},
|
||||
{
|
||||
root: "/media/configdrive",
|
||||
files: test.NewMockFilesystem(test.File{Path: "/media/configdrive/openstack/latest/meta_data.json", Contents: `{"hostname": "host", "network_config": {"content_path": "config_file.json"}, "public_keys":{"1": "key1", "2": "key2"}}`},
|
||||
test.File{Path: "/media/configdrive/openstack/config_file.json", Contents: "make it work"},
|
||||
),
|
||||
metadata: datasource.Metadata{
|
||||
Hostname: "host",
|
||||
NetworkConfig: []byte("make it work"),
|
||||
SSHPublicKeys: map[string]string{
|
||||
"1": "key1",
|
||||
"2": "key2",
|
||||
},
|
||||
},
|
||||
},
|
||||
} {
|
||||
cd := configDrive{tt.root, tt.files.ReadFile}
|
||||
metadata, err := cd.FetchMetadata()
|
||||
if err != nil {
|
||||
t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.metadata, metadata) {
|
||||
t.Fatalf("bad metadata for %+v: want %#v, got %#v", tt, tt.metadata, metadata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchUserdata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
files test.MockFilesystem
|
||||
|
||||
userdata string
|
||||
}{
|
||||
{
|
||||
"/",
|
||||
test.NewMockFilesystem(),
|
||||
"",
|
||||
},
|
||||
{
|
||||
"/",
|
||||
test.NewMockFilesystem(test.File{Path: "/openstack/latest/user_data", Contents: "userdata"}),
|
||||
"userdata",
|
||||
},
|
||||
{
|
||||
"/media/configdrive",
|
||||
test.NewMockFilesystem(test.File{Path: "/media/configdrive/openstack/latest/user_data", Contents: "userdata"}),
|
||||
"userdata",
|
||||
},
|
||||
} {
|
||||
cd := configDrive{tt.root, tt.files.ReadFile}
|
||||
userdata, err := cd.FetchUserdata()
|
||||
if err != nil {
|
||||
t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
|
||||
}
|
||||
if string(userdata) != tt.userdata {
|
||||
t.Fatalf("bad userdata for %+v: want %q, got %q", tt, tt.userdata, userdata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigRoot(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
configRoot string
|
||||
}{
|
||||
{
|
||||
"/",
|
||||
"/openstack",
|
||||
},
|
||||
{
|
||||
"/media/configdrive",
|
||||
"/media/configdrive/openstack",
|
||||
},
|
||||
} {
|
||||
cd := configDrive{tt.root, nil}
|
||||
if configRoot := cd.ConfigRoot(); configRoot != tt.configRoot {
|
||||
t.Fatalf("bad config root for %q: want %q, got %q", tt, tt.configRoot, configRoot)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewDatasource(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
expectRoot string
|
||||
}{
|
||||
{
|
||||
root: "",
|
||||
expectRoot: "",
|
||||
},
|
||||
{
|
||||
root: "/media/configdrive",
|
||||
expectRoot: "/media/configdrive",
|
||||
},
|
||||
} {
|
||||
service := NewDatasource(tt.root)
|
||||
if service.root != tt.expectRoot {
|
||||
t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.root)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,38 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package datasource

import (
	"net"
)

type Datasource interface {
	IsAvailable() bool
	AvailabilityChanges() bool
	ConfigRoot() string
	FetchMetadata() (Metadata, error)
	FetchUserdata() ([]byte, error)
	Type() string
}

type Metadata struct {
	PublicIPv4    net.IP
	PublicIPv6    net.IP
	PrivateIPv4   net.IP
	PrivateIPv6   net.IP
	Hostname      string
	SSHPublicKeys map[string]string
	NetworkConfig interface{}
}
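
Any new provider only has to satisfy this interface. A minimal, hypothetical no-op implementation (not part of this tree) looks like:

```go
package noop

import "github.com/coreos/coreos-cloudinit/datasource"

// noopDatasource is a hypothetical provider that reports itself as
// available but returns empty metadata and userdata.
type noopDatasource struct{}

func (d noopDatasource) IsAvailable() bool         { return true }
func (d noopDatasource) AvailabilityChanges() bool { return false }
func (d noopDatasource) ConfigRoot() string        { return "" }
func (d noopDatasource) FetchMetadata() (datasource.Metadata, error) {
	return datasource.Metadata{}, nil
}
func (d noopDatasource) FetchUserdata() ([]byte, error) { return nil, nil }
func (d noopDatasource) Type() string                   { return "noop" }

// Compile-time check that noopDatasource satisfies the interface.
var _ datasource.Datasource = noopDatasource{}
```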
@ -1,55 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package file
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
)
|
||||
|
||||
type localFile struct {
|
||||
path string
|
||||
}
|
||||
|
||||
func NewDatasource(path string) *localFile {
|
||||
return &localFile{path}
|
||||
}
|
||||
|
||||
func (f *localFile) IsAvailable() bool {
|
||||
_, err := os.Stat(f.path)
|
||||
return !os.IsNotExist(err)
|
||||
}
|
||||
|
||||
func (f *localFile) AvailabilityChanges() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (f *localFile) ConfigRoot() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (f *localFile) FetchMetadata() (datasource.Metadata, error) {
|
||||
return datasource.Metadata{}, nil
|
||||
}
|
||||
|
||||
func (f *localFile) FetchUserdata() ([]byte, error) {
|
||||
return ioutil.ReadFile(f.path)
|
||||
}
|
||||
|
||||
func (f *localFile) Type() string {
|
||||
return "local-file"
|
||||
}
|
@ -1,111 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package digitalocean
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net"
|
||||
"strconv"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata"
|
||||
)
|
||||
|
||||
const (
|
||||
DefaultAddress = "http://169.254.169.254/"
|
||||
apiVersion = "metadata/v1"
|
||||
userdataUrl = apiVersion + "/user-data"
|
||||
metadataPath = apiVersion + ".json"
|
||||
)
|
||||
|
||||
type Address struct {
|
||||
IPAddress string `json:"ip_address"`
|
||||
Netmask string `json:"netmask"`
|
||||
Cidr int `json:"cidr"`
|
||||
Gateway string `json:"gateway"`
|
||||
}
|
||||
|
||||
type Interface struct {
|
||||
IPv4 *Address `json:"ipv4"`
|
||||
IPv6 *Address `json:"ipv6"`
|
||||
AnchorIPv4 *Address `json:"anchor_ipv4"`
|
||||
MAC string `json:"mac"`
|
||||
Type string `json:"type"`
|
||||
}
|
||||
|
||||
type Interfaces struct {
|
||||
Public []Interface `json:"public"`
|
||||
Private []Interface `json:"private"`
|
||||
}
|
||||
|
||||
type DNS struct {
|
||||
Nameservers []string `json:"nameservers"`
|
||||
}
|
||||
|
||||
type Metadata struct {
|
||||
Hostname string `json:"hostname"`
|
||||
Interfaces Interfaces `json:"interfaces"`
|
||||
PublicKeys []string `json:"public_keys"`
|
||||
DNS DNS `json:"dns"`
|
||||
}
|
||||
|
||||
type metadataService struct {
|
||||
metadata.MetadataService
|
||||
}
|
||||
|
||||
func NewDatasource(root string) *metadataService {
|
||||
return &metadataService{MetadataService: metadata.NewDatasource(root, apiVersion, userdataUrl, metadataPath)}
|
||||
}
|
||||
|
||||
func (ms *metadataService) FetchMetadata() (metadata datasource.Metadata, err error) {
|
||||
var data []byte
|
||||
var m Metadata
|
||||
|
||||
if data, err = ms.FetchData(ms.MetadataUrl()); err != nil || len(data) == 0 {
|
||||
return
|
||||
}
|
||||
if err = json.Unmarshal(data, &m); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if len(m.Interfaces.Public) > 0 {
|
||||
if m.Interfaces.Public[0].IPv4 != nil {
|
||||
metadata.PublicIPv4 = net.ParseIP(m.Interfaces.Public[0].IPv4.IPAddress)
|
||||
}
|
||||
if m.Interfaces.Public[0].IPv6 != nil {
|
||||
metadata.PublicIPv6 = net.ParseIP(m.Interfaces.Public[0].IPv6.IPAddress)
|
||||
}
|
||||
}
|
||||
if len(m.Interfaces.Private) > 0 {
|
||||
if m.Interfaces.Private[0].IPv4 != nil {
|
||||
metadata.PrivateIPv4 = net.ParseIP(m.Interfaces.Private[0].IPv4.IPAddress)
|
||||
}
|
||||
if m.Interfaces.Private[0].IPv6 != nil {
|
||||
metadata.PrivateIPv6 = net.ParseIP(m.Interfaces.Private[0].IPv6.IPAddress)
|
||||
}
|
||||
}
|
||||
metadata.Hostname = m.Hostname
|
||||
metadata.SSHPublicKeys = map[string]string{}
|
||||
for i, key := range m.PublicKeys {
|
||||
metadata.SSHPublicKeys[strconv.Itoa(i)] = key
|
||||
}
|
||||
metadata.NetworkConfig = m
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func (ms metadataService) Type() string {
|
||||
return "digitalocean-metadata-service"
|
||||
}
|
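
The only non-obvious transform in `FetchMetadata` above is turning the provider's ordered key list into the indexed map that the generic `datasource.Metadata` struct expects. In isolation that step is just (the sample keys are made up):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Hypothetical key list as it arrives in the "public_keys" JSON field.
	publicKeys := []string{"ssh-rsa AAAA... alice", "ssh-rsa BBBB... bob"}

	// Index each key by its position, mirroring what FetchMetadata does.
	sshPublicKeys := map[string]string{}
	for i, key := range publicKeys {
		sshPublicKeys[strconv.Itoa(i)] = key
	}
	fmt.Println(sshPublicKeys) // map[0:ssh-rsa AAAA... alice 1:ssh-rsa BBBB... bob]
}
```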
@ -1,143 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package digitalocean
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
)
|
||||
|
||||
func TestType(t *testing.T) {
|
||||
want := "digitalocean-metadata-service"
|
||||
if kind := (metadataService{}).Type(); kind != want {
|
||||
t.Fatalf("bad type: want %q, got %q", want, kind)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchMetadata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
metadataPath string
|
||||
resources map[string]string
|
||||
expect datasource.Metadata
|
||||
clientErr error
|
||||
expectErr error
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
metadataPath: "v1.json",
|
||||
resources: map[string]string{
|
||||
"/v1.json": "bad",
|
||||
},
|
||||
expectErr: fmt.Errorf("invalid character 'b' looking for beginning of value"),
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
metadataPath: "v1.json",
|
||||
resources: map[string]string{
|
||||
"/v1.json": `{
|
||||
"droplet_id": 1,
|
||||
"user_data": "hello",
|
||||
"vendor_data": "hello",
|
||||
"public_keys": [
|
||||
"publickey1",
|
||||
"publickey2"
|
||||
],
|
||||
"region": "nyc2",
|
||||
"interfaces": {
|
||||
"public": [
|
||||
{
|
||||
"ipv4": {
|
||||
"ip_address": "192.168.1.2",
|
||||
"netmask": "255.255.255.0",
|
||||
"gateway": "192.168.1.1"
|
||||
},
|
||||
"ipv6": {
|
||||
"ip_address": "fe00::",
|
||||
"cidr": 126,
|
||||
"gateway": "fe00::"
|
||||
},
|
||||
"mac": "ab:cd:ef:gh:ij",
|
||||
"type": "public"
|
||||
}
|
||||
]
|
||||
}
|
||||
}`,
|
||||
},
|
||||
expect: datasource.Metadata{
|
||||
PublicIPv4: net.ParseIP("192.168.1.2"),
|
||||
PublicIPv6: net.ParseIP("fe00::"),
|
||||
SSHPublicKeys: map[string]string{
|
||||
"0": "publickey1",
|
||||
"1": "publickey2",
|
||||
},
|
||||
NetworkConfig: Metadata{
|
||||
Interfaces: Interfaces{
|
||||
Public: []Interface{
|
||||
Interface{
|
||||
IPv4: &Address{
|
||||
IPAddress: "192.168.1.2",
|
||||
Netmask: "255.255.255.0",
|
||||
Gateway: "192.168.1.1",
|
||||
},
|
||||
IPv6: &Address{
|
||||
IPAddress: "fe00::",
|
||||
Cidr: 126,
|
||||
Gateway: "fe00::",
|
||||
},
|
||||
MAC: "ab:cd:ef:gh:ij",
|
||||
Type: "public",
|
||||
},
|
||||
},
|
||||
},
|
||||
PublicKeys: []string{"publickey1", "publickey2"},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
clientErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
|
||||
expectErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
|
||||
},
|
||||
} {
|
||||
service := &metadataService{
|
||||
MetadataService: metadata.MetadataService{
|
||||
Root: tt.root,
|
||||
Client: &test.HttpClient{Resources: tt.resources, Err: tt.clientErr},
|
||||
MetadataPath: tt.metadataPath,
|
||||
},
|
||||
}
|
||||
metadata, err := service.FetchMetadata()
|
||||
if Error(err) != Error(tt.expectErr) {
|
||||
t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.expect, metadata) {
|
||||
t.Fatalf("bad fetch (%q): want %#q, got %#q", tt.resources, tt.expect, metadata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func Error(err error) string {
|
||||
if err != nil {
|
||||
return err.Error()
|
||||
}
|
||||
return ""
|
||||
}
|
@ -1,115 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package ec2
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"log"
|
||||
"net"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
)
|
||||
|
||||
const (
|
||||
DefaultAddress = "http://169.254.169.254/"
|
||||
apiVersion = "2009-04-04/"
|
||||
userdataPath = apiVersion + "user-data"
|
||||
metadataPath = apiVersion + "meta-data"
|
||||
)
|
||||
|
||||
type metadataService struct {
|
||||
metadata.MetadataService
|
||||
}
|
||||
|
||||
func NewDatasource(root string) *metadataService {
|
||||
return &metadataService{metadata.NewDatasource(root, apiVersion, userdataPath, metadataPath)}
|
||||
}
|
||||
|
||||
func (ms metadataService) FetchMetadata() (datasource.Metadata, error) {
|
||||
metadata := datasource.Metadata{}
|
||||
|
||||
if keynames, err := ms.fetchAttributes(fmt.Sprintf("%s/public-keys", ms.MetadataUrl())); err == nil {
|
||||
keyIDs := make(map[string]string)
|
||||
for _, keyname := range keynames {
|
||||
tokens := strings.SplitN(keyname, "=", 2)
|
||||
if len(tokens) != 2 {
|
||||
return metadata, fmt.Errorf("malformed public key: %q", keyname)
|
||||
}
|
||||
keyIDs[tokens[1]] = tokens[0]
|
||||
}
|
||||
|
||||
metadata.SSHPublicKeys = map[string]string{}
|
||||
for name, id := range keyIDs {
|
||||
sshkey, err := ms.fetchAttribute(fmt.Sprintf("%s/public-keys/%s/openssh-key", ms.MetadataUrl(), id))
|
||||
if err != nil {
|
||||
return metadata, err
|
||||
}
|
||||
metadata.SSHPublicKeys[name] = sshkey
|
||||
log.Printf("Found SSH key for %q\n", name)
|
||||
}
|
||||
} else if _, ok := err.(pkg.ErrNotFound); !ok {
|
||||
return metadata, err
|
||||
}
|
||||
|
||||
if hostname, err := ms.fetchAttribute(fmt.Sprintf("%s/hostname", ms.MetadataUrl())); err == nil {
|
||||
metadata.Hostname = strings.Split(hostname, " ")[0]
|
||||
} else if _, ok := err.(pkg.ErrNotFound); !ok {
|
||||
return metadata, err
|
||||
}
|
||||
|
||||
if localAddr, err := ms.fetchAttribute(fmt.Sprintf("%s/local-ipv4", ms.MetadataUrl())); err == nil {
|
||||
metadata.PrivateIPv4 = net.ParseIP(localAddr)
|
||||
} else if _, ok := err.(pkg.ErrNotFound); !ok {
|
||||
return metadata, err
|
||||
}
|
||||
|
||||
if publicAddr, err := ms.fetchAttribute(fmt.Sprintf("%s/public-ipv4", ms.MetadataUrl())); err == nil {
|
||||
metadata.PublicIPv4 = net.ParseIP(publicAddr)
|
||||
} else if _, ok := err.(pkg.ErrNotFound); !ok {
|
||||
return metadata, err
|
||||
}
|
||||
|
||||
return metadata, nil
|
||||
}
|
||||
|
||||
func (ms metadataService) Type() string {
|
||||
return "ec2-metadata-service"
|
||||
}
|
||||
|
||||
func (ms metadataService) fetchAttributes(url string) ([]string, error) {
|
||||
resp, err := ms.FetchData(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
scanner := bufio.NewScanner(bytes.NewBuffer(resp))
|
||||
data := make([]string, 0)
|
||||
for scanner.Scan() {
|
||||
data = append(data, scanner.Text())
|
||||
}
|
||||
return data, scanner.Err()
|
||||
}
|
||||
|
||||
func (ms metadataService) fetchAttribute(url string) (string, error) {
|
||||
if attrs, err := ms.fetchAttributes(url); err == nil && len(attrs) > 0 {
|
||||
return attrs[0], nil
|
||||
} else {
|
||||
return "", err
|
||||
}
|
||||
}
|
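
The EC2 `public-keys` listing comes back as one `index=name` pair per line; `FetchMetadata` splits each pair and later resolves the index to `.../public-keys/<index>/openssh-key`. A standalone sketch of that split (the listing is made up):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical listing from .../meta-data/public-keys
	keynames := []string{"0=my-laptop", "1=deploy-key"}

	keyIDs := make(map[string]string)
	for _, keyname := range keynames {
		tokens := strings.SplitN(keyname, "=", 2)
		if len(tokens) != 2 {
			panic(fmt.Sprintf("malformed public key: %q", keyname))
		}
		// Map key name -> index, as FetchMetadata does.
		keyIDs[tokens[1]] = tokens[0]
	}
	fmt.Println(keyIDs) // map[deploy-key:1 my-laptop:0]
}
```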
@ -1,222 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package ec2
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
)
|
||||
|
||||
func TestType(t *testing.T) {
|
||||
want := "ec2-metadata-service"
|
||||
if kind := (metadataService{}).Type(); kind != want {
|
||||
t.Fatalf("bad type: want %q, got %q", want, kind)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchAttributes(t *testing.T) {
|
||||
for _, s := range []struct {
|
||||
resources map[string]string
|
||||
err error
|
||||
tests []struct {
|
||||
path string
|
||||
val []string
|
||||
}
|
||||
}{
|
||||
{
|
||||
resources: map[string]string{
|
||||
"/": "a\nb\nc/",
|
||||
"/c/": "d\ne/",
|
||||
"/c/e/": "f",
|
||||
"/a": "1",
|
||||
"/b": "2",
|
||||
"/c/d": "3",
|
||||
"/c/e/f": "4",
|
||||
},
|
||||
tests: []struct {
|
||||
path string
|
||||
val []string
|
||||
}{
|
||||
{"/", []string{"a", "b", "c/"}},
|
||||
{"/b", []string{"2"}},
|
||||
{"/c/d", []string{"3"}},
|
||||
{"/c/e/", []string{"f"}},
|
||||
},
|
||||
},
|
||||
{
|
||||
err: fmt.Errorf("test error"),
|
||||
tests: []struct {
|
||||
path string
|
||||
val []string
|
||||
}{
|
||||
{"", nil},
|
||||
},
|
||||
},
|
||||
} {
|
||||
service := metadataService{metadata.MetadataService{
|
||||
Client: &test.HttpClient{Resources: s.resources, Err: s.err},
|
||||
}}
|
||||
for _, tt := range s.tests {
|
||||
attrs, err := service.fetchAttributes(tt.path)
|
||||
if err != s.err {
|
||||
t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.resources, s.err, err)
|
||||
}
|
||||
if !reflect.DeepEqual(attrs, tt.val) {
|
||||
t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.resources, tt.val, attrs)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchAttribute(t *testing.T) {
|
||||
for _, s := range []struct {
|
||||
resources map[string]string
|
||||
err error
|
||||
tests []struct {
|
||||
path string
|
||||
val string
|
||||
}
|
||||
}{
|
||||
{
|
||||
resources: map[string]string{
|
||||
"/": "a\nb\nc/",
|
||||
"/c/": "d\ne/",
|
||||
"/c/e/": "f",
|
||||
"/a": "1",
|
||||
"/b": "2",
|
||||
"/c/d": "3",
|
||||
"/c/e/f": "4",
|
||||
},
|
||||
tests: []struct {
|
||||
path string
|
||||
val string
|
||||
}{
|
||||
{"/a", "1"},
|
||||
{"/b", "2"},
|
||||
{"/c/d", "3"},
|
||||
{"/c/e/f", "4"},
|
||||
},
|
||||
},
|
||||
{
|
||||
err: fmt.Errorf("test error"),
|
||||
tests: []struct {
|
||||
path string
|
||||
val string
|
||||
}{
|
||||
{"", ""},
|
||||
},
|
||||
},
|
||||
} {
|
||||
service := metadataService{metadata.MetadataService{
|
||||
Client: &test.HttpClient{Resources: s.resources, Err: s.err},
|
||||
}}
|
||||
for _, tt := range s.tests {
|
||||
attr, err := service.fetchAttribute(tt.path)
|
||||
if err != s.err {
|
||||
t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.resources, s.err, err)
|
||||
}
|
||||
if attr != tt.val {
|
||||
t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.resources, tt.val, attr)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchMetadata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
metadataPath string
|
||||
resources map[string]string
|
||||
expect datasource.Metadata
|
||||
clientErr error
|
||||
expectErr error
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
metadataPath: "2009-04-04/meta-data",
|
||||
resources: map[string]string{
|
||||
"/2009-04-04/meta-data/public-keys": "bad\n",
|
||||
},
|
||||
expectErr: fmt.Errorf("malformed public key: \"bad\""),
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
metadataPath: "2009-04-04/meta-data",
|
||||
resources: map[string]string{
|
||||
"/2009-04-04/meta-data/hostname": "host",
|
||||
"/2009-04-04/meta-data/local-ipv4": "1.2.3.4",
|
||||
"/2009-04-04/meta-data/public-ipv4": "5.6.7.8",
|
||||
"/2009-04-04/meta-data/public-keys": "0=test1\n",
|
||||
"/2009-04-04/meta-data/public-keys/0": "openssh-key",
|
||||
"/2009-04-04/meta-data/public-keys/0/openssh-key": "key",
|
||||
},
|
||||
expect: datasource.Metadata{
|
||||
Hostname: "host",
|
||||
PrivateIPv4: net.ParseIP("1.2.3.4"),
|
||||
PublicIPv4: net.ParseIP("5.6.7.8"),
|
||||
SSHPublicKeys: map[string]string{"test1": "key"},
|
||||
},
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
metadataPath: "2009-04-04/meta-data",
|
||||
resources: map[string]string{
|
||||
"/2009-04-04/meta-data/hostname": "host domain another_domain",
|
||||
"/2009-04-04/meta-data/local-ipv4": "1.2.3.4",
|
||||
"/2009-04-04/meta-data/public-ipv4": "5.6.7.8",
|
||||
"/2009-04-04/meta-data/public-keys": "0=test1\n",
|
||||
"/2009-04-04/meta-data/public-keys/0": "openssh-key",
|
||||
"/2009-04-04/meta-data/public-keys/0/openssh-key": "key",
|
||||
},
|
||||
expect: datasource.Metadata{
|
||||
Hostname: "host",
|
||||
PrivateIPv4: net.ParseIP("1.2.3.4"),
|
||||
PublicIPv4: net.ParseIP("5.6.7.8"),
|
||||
SSHPublicKeys: map[string]string{"test1": "key"},
|
||||
},
|
||||
},
|
||||
{
|
||||
clientErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
|
||||
expectErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
|
||||
},
|
||||
} {
|
||||
service := &metadataService{metadata.MetadataService{
|
||||
Root: tt.root,
|
||||
Client: &test.HttpClient{Resources: tt.resources, Err: tt.clientErr},
|
||||
MetadataPath: tt.metadataPath,
|
||||
}}
|
||||
metadata, err := service.FetchMetadata()
|
||||
if Error(err) != Error(tt.expectErr) {
|
||||
t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.expect, metadata) {
|
||||
t.Fatalf("bad fetch (%q): want %#v, got %#v", tt.resources, tt.expect, metadata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func Error(err error) string {
|
||||
if err != nil {
|
||||
return err.Error()
|
||||
}
|
||||
return ""
|
||||
}
|
@ -1,71 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package metadata
|
||||
|
||||
import (
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
)
|
||||
|
||||
type MetadataService struct {
|
||||
Root string
|
||||
Client pkg.Getter
|
||||
ApiVersion string
|
||||
UserdataPath string
|
||||
MetadataPath string
|
||||
}
|
||||
|
||||
func NewDatasource(root, apiVersion, userdataPath, metadataPath string) MetadataService {
|
||||
if !strings.HasSuffix(root, "/") {
|
||||
root += "/"
|
||||
}
|
||||
return MetadataService{root, pkg.NewHttpClient(), apiVersion, userdataPath, metadataPath}
|
||||
}
|
||||
|
||||
func (ms MetadataService) IsAvailable() bool {
|
||||
_, err := ms.Client.Get(ms.Root + ms.ApiVersion)
|
||||
return (err == nil)
|
||||
}
|
||||
|
||||
func (ms MetadataService) AvailabilityChanges() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (ms MetadataService) ConfigRoot() string {
|
||||
return ms.Root
|
||||
}
|
||||
|
||||
func (ms MetadataService) FetchUserdata() ([]byte, error) {
|
||||
return ms.FetchData(ms.UserdataUrl())
|
||||
}
|
||||
|
||||
func (ms MetadataService) FetchData(url string) ([]byte, error) {
|
||||
if data, err := ms.Client.GetRetry(url); err == nil {
|
||||
return data, err
|
||||
} else if _, ok := err.(pkg.ErrNotFound); ok {
|
||||
return []byte{}, nil
|
||||
} else {
|
||||
return data, err
|
||||
}
|
||||
}
|
||||
|
||||
func (ms MetadataService) MetadataUrl() string {
|
||||
return (ms.Root + ms.MetadataPath)
|
||||
}
|
||||
|
||||
func (ms MetadataService) UserdataUrl() string {
|
||||
return (ms.Root + ms.UserdataPath)
|
||||
}
|
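
Every HTTP-backed provider embeds this type, so the root normalization in `NewDatasource` plus plain string concatenation is all the URL building there is. A small standalone sketch of that behaviour (the address is only illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// joinRoot mirrors the normalization NewDatasource applies before
// UserdataUrl/MetadataUrl are formed by simple concatenation.
func joinRoot(root, path string) string {
	if !strings.HasSuffix(root, "/") {
		root += "/"
	}
	return root + path
}

func main() {
	fmt.Println(joinRoot("http://169.254.169.254", "2009-04-04/user-data"))
	// Output: http://169.254.169.254/2009-04-04/user-data
}
```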
@ -1,185 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package metadata
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
)
|
||||
|
||||
func TestAvailabilityChanges(t *testing.T) {
|
||||
want := true
|
||||
if ac := (MetadataService{}).AvailabilityChanges(); ac != want {
|
||||
t.Fatalf("bad AvailabilityChanges: want %t, got %t", want, ac)
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsAvailable(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
apiVersion string
|
||||
resources map[string]string
|
||||
expect bool
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
apiVersion: "2009-04-04",
|
||||
resources: map[string]string{
|
||||
"/2009-04-04": "",
|
||||
},
|
||||
expect: true,
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
resources: map[string]string{},
|
||||
expect: false,
|
||||
},
|
||||
} {
|
||||
service := &MetadataService{
|
||||
Root: tt.root,
|
||||
Client: &test.HttpClient{Resources: tt.resources, Err: nil},
|
||||
ApiVersion: tt.apiVersion,
|
||||
}
|
||||
if a := service.IsAvailable(); a != tt.expect {
|
||||
t.Fatalf("bad isAvailable (%q): want %t, got %t", tt.resources, tt.expect, a)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchUserdata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
userdataPath string
|
||||
resources map[string]string
|
||||
userdata []byte
|
||||
clientErr error
|
||||
expectErr error
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
userdataPath: "2009-04-04/user-data",
|
||||
resources: map[string]string{
|
||||
"/2009-04-04/user-data": "hello",
|
||||
},
|
||||
userdata: []byte("hello"),
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
clientErr: pkg.ErrNotFound{Err: fmt.Errorf("test not found error")},
|
||||
userdata: []byte{},
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
clientErr: pkg.ErrTimeout{Err: fmt.Errorf("test timeout error")},
|
||||
expectErr: pkg.ErrTimeout{Err: fmt.Errorf("test timeout error")},
|
||||
},
|
||||
} {
|
||||
service := &MetadataService{
|
||||
Root: tt.root,
|
||||
Client: &test.HttpClient{Resources: tt.resources, Err: tt.clientErr},
|
||||
UserdataPath: tt.userdataPath,
|
||||
}
|
||||
data, err := service.FetchUserdata()
|
||||
if Error(err) != Error(tt.expectErr) {
|
||||
t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
|
||||
}
|
||||
if !bytes.Equal(data, tt.userdata) {
|
||||
t.Fatalf("bad userdata (%q): want %q, got %q", tt.resources, tt.userdata, data)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestUrls(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
userdataPath string
|
||||
metadataPath string
|
||||
expectRoot string
|
||||
userdata string
|
||||
metadata string
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
userdataPath: "2009-04-04/user-data",
|
||||
metadataPath: "2009-04-04/meta-data",
|
||||
expectRoot: "/",
|
||||
userdata: "/2009-04-04/user-data",
|
||||
metadata: "/2009-04-04/meta-data",
|
||||
},
|
||||
{
|
||||
root: "http://169.254.169.254/",
|
||||
userdataPath: "2009-04-04/user-data",
|
||||
metadataPath: "2009-04-04/meta-data",
|
||||
expectRoot: "http://169.254.169.254/",
|
||||
userdata: "http://169.254.169.254/2009-04-04/user-data",
|
||||
metadata: "http://169.254.169.254/2009-04-04/meta-data",
|
||||
},
|
||||
} {
|
||||
service := &MetadataService{
|
||||
Root: tt.root,
|
||||
UserdataPath: tt.userdataPath,
|
||||
MetadataPath: tt.metadataPath,
|
||||
}
|
||||
if url := service.UserdataUrl(); url != tt.userdata {
|
||||
t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.userdata, url)
|
||||
}
|
||||
if url := service.MetadataUrl(); url != tt.metadata {
|
||||
t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.metadata, url)
|
||||
}
|
||||
if url := service.ConfigRoot(); url != tt.expectRoot {
|
||||
t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.expectRoot, url)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewDatasource(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
expectRoot string
|
||||
}{
|
||||
{
|
||||
root: "",
|
||||
expectRoot: "/",
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
expectRoot: "/",
|
||||
},
|
||||
{
|
||||
root: "http://169.254.169.254",
|
||||
expectRoot: "http://169.254.169.254/",
|
||||
},
|
||||
{
|
||||
root: "http://169.254.169.254/",
|
||||
expectRoot: "http://169.254.169.254/",
|
||||
},
|
||||
} {
|
||||
service := NewDatasource(tt.root, "", "", "")
|
||||
if service.Root != tt.expectRoot {
|
||||
t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.Root)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func Error(err error) string {
|
||||
if err != nil {
|
||||
return err.Error()
|
||||
}
|
||||
return ""
|
||||
}
|
@ -1,112 +0,0 @@
|
||||
/*
|
||||
Copyright 2014 CoreOS, Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package openstack
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net"
|
||||
"strconv"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata"
|
||||
)
|
||||
|
||||
const (
|
||||
DefaultAddress = "http://169.254.169.254/"
|
||||
apiVersion = "openstack/latest"
|
||||
userdataUrl = apiVersion + "/user_data"
|
||||
metadataPath = apiVersion + "/meta_data.json"
|
||||
)
|
||||
|
||||
type Address struct {
|
||||
IPAddress string `json:"ip_address"`
|
||||
Netmask string `json:"netmask"`
|
||||
Cidr int `json:"cidr"`
|
||||
Gateway string `json:"gateway"`
|
||||
}
|
||||
|
||||
type Interface struct {
|
||||
IPv4 *Address `json:"ipv4"`
|
||||
IPv6 *Address `json:"ipv6"`
|
||||
MAC string `json:"mac"`
|
||||
Type string `json:"type"`
|
||||
}
|
||||
|
||||
type Interfaces struct {
|
||||
Public []Interface `json:"public"`
|
||||
Private []Interface `json:"private"`
|
||||
}
|
||||
|
||||
type DNS struct {
|
||||
Nameservers []string `json:"nameservers"`
|
||||
}
|
||||
|
||||
type Metadata struct {
|
||||
Hostname string `json:"hostname"`
|
||||
Interfaces Interfaces `json:"interfaces"`
|
||||
PublicKeys map[string]string `json:"public_keys"`
|
||||
DNS DNS `json:"dns"`
|
||||
}
|
||||
|
||||
type metadataService struct {
|
||||
metadata.MetadataService
|
||||
}
|
||||
|
||||
func NewDatasource(root string) *metadataService {
|
||||
return &metadataService{MetadataService: metadata.NewDatasource(root, apiVersion, userdataUrl, metadataPath)}
|
||||
}
|
||||
|
||||
func (ms *metadataService) FetchMetadata() (metadata datasource.Metadata, err error) {
|
||||
var data []byte
|
||||
var m Metadata
|
||||
|
||||
if data, err = ms.FetchData(ms.MetadataUrl()); err != nil || len(data) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
if err = json.Unmarshal(data, &m); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if len(m.Interfaces.Public) > 0 {
|
||||
if m.Interfaces.Public[0].IPv4 != nil {
|
||||
metadata.PublicIPv4 = net.ParseIP(m.Interfaces.Public[0].IPv4.IPAddress)
|
||||
}
|
||||
if m.Interfaces.Public[0].IPv6 != nil {
|
||||
metadata.PublicIPv6 = net.ParseIP(m.Interfaces.Public[0].IPv6.IPAddress)
|
||||
}
|
||||
}
|
||||
if len(m.Interfaces.Private) > 0 {
|
||||
if m.Interfaces.Private[0].IPv4 != nil {
|
||||
metadata.PrivateIPv4 = net.ParseIP(m.Interfaces.Private[0].IPv4.IPAddress)
|
||||
}
|
||||
if m.Interfaces.Private[0].IPv6 != nil {
|
||||
metadata.PrivateIPv6 = net.ParseIP(m.Interfaces.Private[0].IPv6.IPAddress)
|
||||
}
|
||||
}
|
||||
|
||||
metadata.Hostname = m.Hostname
|
||||
metadata.SSHPublicKeys = map[string]string{}
|
||||
metadata.SSHPublicKeys[strconv.Itoa(0)] = m.PublicKeys["root"]
|
||||
metadata.NetworkConfig = data
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func (ms metadataService) Type() string {
|
||||
return "openstack-metadata-service"
|
||||
}
|
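
Unlike the DigitalOcean variant, this datasource keeps the raw JSON document as the network config and lifts only the `root` entry of `public_keys` into the generic map, under index "0". In isolation that mapping is simply (the key value is made up):

```go
package main

import "fmt"

func main() {
	// Hypothetical "public_keys" object from meta_data.json.
	publicKeys := map[string]string{"root": "ssh-rsa AAAA... root@host"}

	// FetchMetadata exposes only the "root" entry, keyed by index "0".
	sshPublicKeys := map[string]string{"0": publicKeys["root"]}
	fmt.Println(sshPublicKeys)
}
```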
@ -1,115 +0,0 @@
|
||||
/*
|
||||
Copyright 2014 CoreOS, Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package openstack
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
)
|
||||
|
||||
func TestType(t *testing.T) {
|
||||
want := "openstack-metadata-service"
|
||||
if kind := (metadataService{}).Type(); kind != want {
|
||||
t.Fatalf("bad type: want %q, got %q", want, kind)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchMetadata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
metadataPath string
|
||||
resources map[string]string
|
||||
expect []byte
|
||||
clientErr error
|
||||
expectErr error
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
metadataPath: "v1.json",
|
||||
resources: map[string]string{
|
||||
"/v1.json": "bad",
|
||||
},
|
||||
expectErr: fmt.Errorf("invalid character 'b' looking for beginning of value"),
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
metadataPath: "v1.json",
|
||||
resources: map[string]string{
|
||||
"/v1.json": `{
|
||||
"droplet_id": 1,
|
||||
"user_data": "hello",
|
||||
"vendor_data": "hello",
|
||||
"public_keys": [
|
||||
"publickey1",
|
||||
"publickey2"
|
||||
],
|
||||
"region": "nyc2",
|
||||
"interfaces": {
|
||||
"public": [
|
||||
{
|
||||
"ipv4": {
|
||||
"ip_address": "192.168.1.2",
|
||||
"netmask": "255.255.255.0",
|
||||
"gateway": "192.168.1.1"
|
||||
},
|
||||
"ipv6": {
|
||||
"ip_address": "fe00::",
|
||||
"cidr": 126,
|
||||
"gateway": "fe00::"
|
||||
},
|
||||
"mac": "ab:cd:ef:gh:ij",
|
||||
"type": "public"
|
||||
}
|
||||
]
|
||||
}
|
||||
}`,
|
||||
},
|
||||
expect: []byte(`{"hostname":"","public-ipv4":"192.168.1.2","public-ipv6":"fe00::","public_keys":{"0":"publickey1","1":"publickey2"}}`),
|
||||
},
|
||||
{
|
||||
clientErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
|
||||
expectErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
|
||||
},
|
||||
} {
|
||||
service := &metadataService{
|
||||
MetadataService: metadata.MetadataService{
|
||||
Root: tt.root,
|
||||
Client: &test.HttpClient{Resources: tt.resources, Err: tt.clientErr},
|
||||
MetadataPath: tt.metadataPath,
|
||||
},
|
||||
}
|
||||
metadata, err := service.FetchMetadata()
|
||||
if Error(err) != Error(tt.expectErr) {
|
||||
t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
|
||||
}
|
||||
if !bytes.Equal(metadata, tt.expect) {
|
||||
t.Fatalf("bad fetch (%q): want %q, got %q", tt.resources, tt.expect, metadata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func Error(err error) string {
|
||||
if err != nil {
|
||||
return err.Error()
|
||||
}
|
||||
return ""
|
||||
}
|
@ -1,106 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package packet
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net"
|
||||
"strconv"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata"
|
||||
)
|
||||
|
||||
const (
|
||||
DefaultAddress = "https://metadata.packet.net/"
|
||||
apiVersion = ""
|
||||
userdataUrl = "userdata"
|
||||
metadataPath = "metadata"
|
||||
)
|
||||
|
||||
type Netblock struct {
|
||||
Address net.IP `json:"address"`
|
||||
Cidr int `json:"cidr"`
|
||||
Netmask net.IP `json:"netmask"`
|
||||
Gateway net.IP `json:"gateway"`
|
||||
AddressFamily int `json:"address_family"`
|
||||
Public bool `json:"public"`
|
||||
}
|
||||
|
||||
type Nic struct {
|
||||
Name string `json:"name"`
|
||||
Mac string `json:"mac"`
|
||||
}
|
||||
|
||||
type NetworkData struct {
|
||||
Interfaces []Nic `json:"interfaces"`
|
||||
Netblocks []Netblock `json:"addresses"`
|
||||
DNS []net.IP `json:"dns"`
|
||||
}
|
||||
|
||||
// Metadata is the subset of fields currently pulled from https://metadata.packet.net/metadata; more can be added later.
|
||||
type Metadata struct {
|
||||
Hostname string `json:"hostname"`
|
||||
SSHKeys []string `json:"ssh_keys"`
|
||||
NetworkData NetworkData `json:"network"`
|
||||
}
|
||||
|
||||
type metadataService struct {
|
||||
metadata.MetadataService
|
||||
}
|
||||
|
||||
func NewDatasource(root string) *metadataService {
|
||||
return &metadataService{MetadataService: metadata.NewDatasource(root, apiVersion, userdataUrl, metadataPath)}
|
||||
}
|
||||
|
||||
func (ms *metadataService) FetchMetadata() (metadata datasource.Metadata, err error) {
|
||||
var data []byte
|
||||
var m Metadata
|
||||
|
||||
if data, err = ms.FetchData(ms.MetadataUrl()); err != nil || len(data) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
if err = json.Unmarshal(data, &m); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if len(m.NetworkData.Netblocks) > 0 {
|
||||
for _, Netblock := range m.NetworkData.Netblocks {
|
||||
if Netblock.AddressFamily == 4 {
|
||||
if Netblock.Public == true {
|
||||
metadata.PublicIPv4 = Netblock.Address
|
||||
} else {
|
||||
metadata.PrivateIPv4 = Netblock.Address
|
||||
}
|
||||
} else {
|
||||
metadata.PublicIPv6 = Netblock.Address
|
||||
}
|
||||
}
|
||||
}
|
||||
metadata.Hostname = m.Hostname
|
||||
metadata.SSHPublicKeys = map[string]string{}
|
||||
for i, key := range m.SSHKeys {
|
||||
metadata.SSHPublicKeys[strconv.Itoa(i)] = key
|
||||
}
|
||||
|
||||
metadata.NetworkConfig = m.NetworkData
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func (ms metadataService) Type() string {
|
||||
return "packet-metadata-service"
|
||||
}
|
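
The netblock loop above routes each address by family and visibility: family 4 goes to the public or private IPv4 slot, and anything else is treated as a public IPv6 address. A standalone sketch of that classification (the addresses are made up):

```go
package main

import (
	"fmt"
	"net"
)

// netblock carries just the fields the classification needs; it mirrors
// the shape of the "addresses" entries above.
type netblock struct {
	Address       net.IP
	AddressFamily int
	Public        bool
}

func main() {
	blocks := []netblock{
		{Address: net.ParseIP("147.75.0.10"), AddressFamily: 4, Public: true},
		{Address: net.ParseIP("10.0.0.2"), AddressFamily: 4, Public: false},
		{Address: net.ParseIP("2604:1380::1"), AddressFamily: 6, Public: true},
	}

	var publicIPv4, privateIPv4, publicIPv6 net.IP
	for _, b := range blocks {
		if b.AddressFamily == 4 {
			if b.Public {
				publicIPv4 = b.Address
			} else {
				privateIPv4 = b.Address
			}
		} else {
			publicIPv6 = b.Address
		}
	}
	fmt.Println(publicIPv4, privateIPv4, publicIPv6)
}
```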
@ -1,41 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package test

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/pkg"
)

type HttpClient struct {
	Resources map[string]string
	Err       error
}

func (t *HttpClient) GetRetry(url string) ([]byte, error) {
	if t.Err != nil {
		return nil, t.Err
	}
	if val, ok := t.Resources[url]; ok {
		return []byte(val), nil
	} else {
		return nil, pkg.ErrNotFound{fmt.Errorf("not found: %q", url)}
	}
}

func (t *HttpClient) Get(url string) ([]byte, error) {
	return t.GetRetry(url)
}
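
A sketch of how the metadata tests use this fake: canned URLs return their body, anything else surfaces as `pkg.ErrNotFound` (the resource map and URLs here are arbitrary):

```go
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
)

func main() {
	client := &test.HttpClient{
		Resources: map[string]string{"/meta-data/hostname": "host"},
	}

	// Known URL: the canned body comes back.
	body, err := client.GetRetry("/meta-data/hostname")
	fmt.Println(string(body), err) // host <nil>

	// Unknown URL: pkg.ErrNotFound.
	_, err = client.GetRetry("/meta-data/missing")
	fmt.Println(err)
}
```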
@ -1,110 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package proc_cmdline
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
)
|
||||
|
||||
const (
|
||||
ProcCmdlineLocation = "/proc/cmdline"
|
||||
ProcCmdlineCloudConfigFlag = "cloud-config-url"
|
||||
)
|
||||
|
||||
type procCmdline struct {
|
||||
Location string
|
||||
}
|
||||
|
||||
func NewDatasource() *procCmdline {
|
||||
return &procCmdline{Location: ProcCmdlineLocation}
|
||||
}
|
||||
|
||||
func (c *procCmdline) IsAvailable() bool {
|
||||
contents, err := ioutil.ReadFile(c.Location)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
cmdline := strings.TrimSpace(string(contents))
|
||||
_, err = findCloudConfigURL(cmdline)
|
||||
return (err == nil)
|
||||
}
|
||||
|
||||
func (c *procCmdline) AvailabilityChanges() bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func (c *procCmdline) ConfigRoot() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (c *procCmdline) FetchMetadata() (datasource.Metadata, error) {
|
||||
return datasource.Metadata{}, nil
|
||||
}
|
||||
|
||||
func (c *procCmdline) FetchUserdata() ([]byte, error) {
|
||||
contents, err := ioutil.ReadFile(c.Location)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cmdline := strings.TrimSpace(string(contents))
|
||||
url, err := findCloudConfigURL(cmdline)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
client := pkg.NewHttpClient()
|
||||
cfg, err := client.GetRetry(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return cfg, nil
|
||||
}
|
||||
|
||||
func (c *procCmdline) Type() string {
|
||||
return "proc-cmdline"
|
||||
}
|
||||
|
||||
func findCloudConfigURL(input string) (url string, err error) {
|
||||
err = errors.New("cloud-config-url not found")
|
||||
for _, token := range strings.Split(input, " ") {
|
||||
parts := strings.SplitN(token, "=", 2)
|
||||
|
||||
key := parts[0]
|
||||
key = strings.Replace(key, "_", "-", -1)
|
||||
|
||||
if key != "cloud-config-url" {
|
||||
continue
|
||||
}
|
||||
|
||||
if len(parts) != 2 {
|
||||
log.Printf("Found cloud-config-url in /proc/cmdline with no value, ignoring.")
|
||||
continue
|
||||
}
|
||||
|
||||
url = parts[1]
|
||||
err = nil
|
||||
}
|
||||
|
||||
return
|
||||
}
|
@ -1,102 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package proc_cmdline
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"os"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestParseCmdlineCloudConfigFound(t *testing.T) {
|
||||
tests := []struct {
|
||||
input string
|
||||
expect string
|
||||
}{
|
||||
{
|
||||
"cloud-config-url=example.com",
|
||||
"example.com",
|
||||
},
|
||||
{
|
||||
"cloud_config_url=example.com",
|
||||
"example.com",
|
||||
},
|
||||
{
|
||||
"cloud-config-url cloud-config-url=example.com",
|
||||
"example.com",
|
||||
},
|
||||
{
|
||||
"cloud-config-url= cloud-config-url=example.com",
|
||||
"example.com",
|
||||
},
|
||||
{
|
||||
"cloud-config-url=one.example.com cloud-config-url=two.example.com",
|
||||
"two.example.com",
|
||||
},
|
||||
{
|
||||
"foo=bar cloud-config-url=example.com ping=pong",
|
||||
"example.com",
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
output, err := findCloudConfigURL(tt.input)
|
||||
if output != tt.expect {
|
||||
t.Errorf("Test case %d failed: %s != %s", i, output, tt.expect)
|
||||
}
|
||||
if err != nil {
|
||||
t.Errorf("Test case %d produced error: %v", i, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestProcCmdlineAndFetchConfig(t *testing.T) {
|
||||
|
||||
var (
|
||||
ProcCmdlineTmpl = "foo=bar cloud-config-url=%s/config\n"
|
||||
CloudConfigContent = "#cloud-config\n"
|
||||
)
|
||||
|
||||
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
if r.Method == "GET" && r.RequestURI == "/config" {
|
||||
fmt.Fprint(w, CloudConfigContent)
|
||||
}
|
||||
}))
|
||||
defer ts.Close()
|
||||
|
||||
file, err := ioutil.TempFile(os.TempDir(), "test_proc_cmdline")
|
||||
defer os.Remove(file.Name())
|
||||
if err != nil {
|
||||
t.Errorf("Test produced error: %v", err)
|
||||
}
|
||||
_, err = file.Write([]byte(fmt.Sprintf(ProcCmdlineTmpl, ts.URL)))
|
||||
if err != nil {
|
||||
t.Errorf("Test produced error: %v", err)
|
||||
}
|
||||
|
||||
p := NewDatasource()
|
||||
p.Location = file.Name()
|
||||
cfg, err := p.FetchUserdata()
|
||||
if err != nil {
|
||||
t.Errorf("Test produced error: %v", err)
|
||||
}
|
||||
|
||||
if string(cfg) != CloudConfigContent {
|
||||
t.Errorf("Test failed, response body: %s != %s", cfg, CloudConfigContent)
|
||||
}
|
||||
}
|
@ -1,57 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path"
|
||||
)
|
||||
|
||||
type MockFilesystem map[string]File
|
||||
|
||||
type File struct {
|
||||
Path string
|
||||
Contents string
|
||||
Directory bool
|
||||
}
|
||||
|
||||
func (m MockFilesystem) ReadFile(filename string) ([]byte, error) {
|
||||
if f, ok := m[path.Clean(filename)]; ok {
|
||||
if f.Directory {
|
||||
return nil, fmt.Errorf("read %s: is a directory", filename)
|
||||
}
|
||||
return []byte(f.Contents), nil
|
||||
}
|
||||
return nil, os.ErrNotExist
|
||||
}
|
||||
|
||||
func NewMockFilesystem(files ...File) MockFilesystem {
|
||||
fs := MockFilesystem{}
|
||||
for _, file := range files {
|
||||
fs[file.Path] = file
|
||||
|
||||
// Create the directories leading up to the file
|
||||
p := path.Dir(file.Path)
|
||||
for p != "/" && p != "." {
|
||||
if f, ok := fs[p]; ok && !f.Directory {
|
||||
panic(fmt.Sprintf("%q already exists and is not a directory (%#v)", p, f))
|
||||
}
|
||||
fs[p] = File{Path: p, Directory: true}
|
||||
p = path.Dir(p)
|
||||
}
|
||||
}
|
||||
return fs
|
||||
}
|
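
A small usage sketch of the mock: parent directories are created implicitly, reading a directory fails, and a missing path comes back as `os.ErrNotExist` (the paths here are arbitrary):

```go
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/datasource/test"
)

func main() {
	fs := test.NewMockFilesystem(
		test.File{Path: "/media/configdrive/openstack/latest/user_data", Contents: "#cloud-config\n"},
	)

	// The file itself reads back as-is.
	data, err := fs.ReadFile("/media/configdrive/openstack/latest/user_data")
	fmt.Printf("%q %v\n", data, err)

	// Its implicitly created parent is a directory, so reading it errors.
	_, err = fs.ReadFile("/media/configdrive/openstack")
	fmt.Println(err)
}
```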
@ -1,115 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package test
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"os"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestReadFile(t *testing.T) {
|
||||
tests := []struct {
|
||||
filesystem MockFilesystem
|
||||
|
||||
filename string
|
||||
contents string
|
||||
err error
|
||||
}{
|
||||
{
|
||||
filename: "dne",
|
||||
err: os.ErrNotExist,
|
||||
},
|
||||
{
|
||||
filesystem: MockFilesystem{
|
||||
"exists": File{Contents: "hi"},
|
||||
},
|
||||
filename: "exists",
|
||||
contents: "hi",
|
||||
},
|
||||
{
|
||||
filesystem: MockFilesystem{
|
||||
"dir": File{Directory: true},
|
||||
},
|
||||
filename: "dir",
|
||||
err: errors.New("read dir: is a directory"),
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
contents, err := tt.filesystem.ReadFile(tt.filename)
|
||||
if tt.contents != string(contents) {
|
||||
t.Errorf("bad contents (test %d): want %q, got %q", i, tt.contents, string(contents))
|
||||
}
|
||||
if !reflect.DeepEqual(tt.err, err) {
|
||||
t.Errorf("bad error (test %d): want %v, got %v", i, tt.err, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewMockFilesystem(t *testing.T) {
|
||||
tests := []struct {
|
||||
files []File
|
||||
|
||||
filesystem MockFilesystem
|
||||
}{
|
||||
{
|
||||
filesystem: MockFilesystem{},
|
||||
},
|
||||
{
|
||||
files: []File{File{Path: "file"}},
|
||||
filesystem: MockFilesystem{
|
||||
"file": File{Path: "file"},
|
||||
},
|
||||
},
|
||||
{
|
||||
files: []File{File{Path: "/file"}},
|
||||
filesystem: MockFilesystem{
|
||||
"/file": File{Path: "/file"},
|
||||
},
|
||||
},
|
||||
{
|
||||
files: []File{File{Path: "/dir/file"}},
|
||||
filesystem: MockFilesystem{
|
||||
"/dir": File{Path: "/dir", Directory: true},
|
||||
"/dir/file": File{Path: "/dir/file"},
|
||||
},
|
||||
},
|
||||
{
|
||||
files: []File{File{Path: "/dir/dir/file"}},
|
||||
filesystem: MockFilesystem{
|
||||
"/dir": File{Path: "/dir", Directory: true},
|
||||
"/dir/dir": File{Path: "/dir/dir", Directory: true},
|
||||
"/dir/dir/file": File{Path: "/dir/dir/file"},
|
||||
},
|
||||
},
|
||||
{
|
||||
files: []File{File{Path: "/dir/dir/dir", Directory: true}},
|
||||
filesystem: MockFilesystem{
|
||||
"/dir": File{Path: "/dir", Directory: true},
|
||||
"/dir/dir": File{Path: "/dir/dir", Directory: true},
|
||||
"/dir/dir/dir": File{Path: "/dir/dir/dir", Directory: true},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
filesystem := NewMockFilesystem(tt.files...)
|
||||
if !reflect.DeepEqual(tt.filesystem, filesystem) {
|
||||
t.Errorf("bad filesystem (test %d): want %#v, got %#v", i, tt.filesystem, filesystem)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,55 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package url

import (
	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/pkg"
)

type remoteFile struct {
	url string
}

func NewDatasource(url string) *remoteFile {
	return &remoteFile{url}
}

func (f *remoteFile) IsAvailable() bool {
	client := pkg.NewHttpClient()
	_, err := client.Get(f.url)
	return (err == nil)
}

func (f *remoteFile) AvailabilityChanges() bool {
	return true
}

func (f *remoteFile) ConfigRoot() string {
	return ""
}

func (f *remoteFile) FetchMetadata() (datasource.Metadata, error) {
	return datasource.Metadata{}, nil
}

func (f *remoteFile) FetchUserdata() ([]byte, error) {
	client := pkg.NewHttpClient()
	return client.GetRetry(f.url)
}

func (f *remoteFile) Type() string {
	return "url"
}
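For context, a minimal sketch of how a datasource like the one above might be driven through its availability/userdata methods. The import path, the example URL, and the surrounding `main` function are assumptions for illustration, not part of this change:

```
package main

import (
	"log"

	// Assumed import path for the url datasource shown above.
	"github.com/coreos/coreos-cloudinit/datasource/url"
)

func main() {
	// Hypothetical endpoint serving a cloud-config or script.
	ds := url.NewDatasource("http://example.com/user_data")

	// IsAvailable performs a plain GET; FetchUserdata fetches with retries.
	if !ds.IsAvailable() {
		log.Fatalf("datasource %q is not available", ds.Type())
	}
	userdata, err := ds.FetchUserdata()
	if err != nil {
		log.Fatalf("fetching userdata: %v", err)
	}
	log.Printf("fetched %d bytes of userdata", len(userdata))
}
```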
@ -1,183 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package vmware

import (
	"fmt"
	"log"
	"net"

	"github.com/coreos/coreos-cloudinit/config"
	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/pkg"

	"github.com/sigma/vmw-guestinfo/rpcvmx"
	"github.com/sigma/vmw-guestinfo/vmcheck"
)

type readConfigFunction func(key string) (string, error)
type urlDownloadFunction func(url string) ([]byte, error)

type vmware struct {
	readConfig  readConfigFunction
	urlDownload urlDownloadFunction
}

func NewDatasource() *vmware {
	return &vmware{
		readConfig:  readConfig,
		urlDownload: urlDownload,
	}
}

func (v vmware) IsAvailable() bool {
	return vmcheck.IsVirtualWorld()
}

func (v vmware) AvailabilityChanges() bool {
	return false
}

func (v vmware) ConfigRoot() string {
	return "/"
}

func (v vmware) FetchMetadata() (metadata datasource.Metadata, err error) {
	metadata.Hostname, _ = v.readConfig("hostname")

	netconf := map[string]string{}
	saveConfig := func(key string, args ...interface{}) string {
		key = fmt.Sprintf(key, args...)
		val, _ := v.readConfig(key)
		if val != "" {
			netconf[key] = val
		}
		return val
	}

	for i := 0; ; i++ {
		if nameserver := saveConfig("dns.server.%d", i); nameserver == "" {
			break
		}
	}

	found := true
	for i := 0; found; i++ {
		found = false

		found = (saveConfig("interface.%d.name", i) != "") || found
		found = (saveConfig("interface.%d.mac", i) != "") || found
		found = (saveConfig("interface.%d.dhcp", i) != "") || found

		role, _ := v.readConfig(fmt.Sprintf("interface.%d.role", i))
		for a := 0; ; a++ {
			address := saveConfig("interface.%d.ip.%d.address", i, a)
			if address == "" {
				break
			} else {
				found = true
			}

			ip, _, err := net.ParseCIDR(address)
			if err != nil {
				return metadata, err
			}

			switch role {
			case "public":
				if ip.To4() != nil {
					metadata.PublicIPv4 = ip
				} else {
					metadata.PublicIPv6 = ip
				}
			case "private":
				if ip.To4() != nil {
					metadata.PrivateIPv4 = ip
				} else {
					metadata.PrivateIPv6 = ip
				}
			case "":
			default:
				return metadata, fmt.Errorf("unrecognized role: %q", role)
			}
		}

		for r := 0; ; r++ {
			gateway := saveConfig("interface.%d.route.%d.gateway", i, r)
			destination := saveConfig("interface.%d.route.%d.destination", i, r)

			if gateway == "" && destination == "" {
				break
			} else {
				found = true
			}
		}
	}
	metadata.NetworkConfig = netconf

	return
}

func (v vmware) FetchUserdata() ([]byte, error) {
	encoding, err := v.readConfig("coreos.config.data.encoding")
	if err != nil {
		return nil, err
	}

	data, err := v.readConfig("coreos.config.data")
	if err != nil {
		return nil, err
	}

	// Try to fall back to the url if there is no explicit data
	if data == "" {
		url, err := v.readConfig("coreos.config.url")
		if err != nil {
			return nil, err
		}

		if url != "" {
			rawData, err := v.urlDownload(url)
			if err != nil {
				return nil, err
			}
			data = string(rawData)
		}
	}

	if encoding != "" {
		return config.DecodeContent(data, encoding)
	}
	return []byte(data), nil
}

func (v vmware) Type() string {
	return "vmware"
}

func urlDownload(url string) ([]byte, error) {
	client := pkg.NewHttpClient()
	return client.GetRetry(url)
}

func readConfig(key string) (string, error) {
	data, err := rpcvmx.NewConfig().GetString(key, "")
	if err == nil {
		log.Printf("Read from %q: %q\n", key, data)
	} else {
		log.Printf("Failed to read from %q: %v\n", key, err)
	}
	return data, err
}
@ -1,216 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package vmware
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"net"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
)
|
||||
|
||||
type MockHypervisor map[string]string
|
||||
|
||||
func (h MockHypervisor) ReadConfig(key string) (string, error) {
|
||||
return h[key], nil
|
||||
}
|
||||
|
||||
func TestFetchMetadata(t *testing.T) {
|
||||
tests := []struct {
|
||||
variables MockHypervisor
|
||||
|
||||
metadata datasource.Metadata
|
||||
err error
|
||||
}{
|
||||
{
|
||||
variables: map[string]string{
|
||||
"interface.0.mac": "test mac",
|
||||
"interface.0.dhcp": "yes",
|
||||
},
|
||||
metadata: datasource.Metadata{
|
||||
NetworkConfig: map[string]string{
|
||||
"interface.0.mac": "test mac",
|
||||
"interface.0.dhcp": "yes",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"interface.0.name": "test name",
|
||||
"interface.0.dhcp": "yes",
|
||||
},
|
||||
metadata: datasource.Metadata{
|
||||
NetworkConfig: map[string]string{
|
||||
"interface.0.name": "test name",
|
||||
"interface.0.dhcp": "yes",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"hostname": "test host",
|
||||
"interface.0.mac": "test mac",
|
||||
"interface.0.role": "private",
|
||||
"interface.0.ip.0.address": "fe00::100/64",
|
||||
"interface.0.route.0.gateway": "fe00::1",
|
||||
"interface.0.route.0.destination": "::",
|
||||
},
|
||||
metadata: datasource.Metadata{
|
||||
Hostname: "test host",
|
||||
PrivateIPv6: net.ParseIP("fe00::100"),
|
||||
NetworkConfig: map[string]string{
|
||||
"interface.0.mac": "test mac",
|
||||
"interface.0.ip.0.address": "fe00::100/64",
|
||||
"interface.0.route.0.gateway": "fe00::1",
|
||||
"interface.0.route.0.destination": "::",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"hostname": "test host",
|
||||
"interface.0.name": "test name",
|
||||
"interface.0.role": "public",
|
||||
"interface.0.ip.0.address": "10.0.0.100/24",
|
||||
"interface.0.ip.1.address": "10.0.0.101/24",
|
||||
"interface.0.route.0.gateway": "10.0.0.1",
|
||||
"interface.0.route.0.destination": "0.0.0.0",
|
||||
"interface.1.mac": "test mac",
|
||||
"interface.1.role": "private",
|
||||
"interface.1.route.0.gateway": "10.0.0.2",
|
||||
"interface.1.route.0.destination": "0.0.0.0",
|
||||
"interface.1.ip.0.address": "10.0.0.102/24",
|
||||
},
|
||||
metadata: datasource.Metadata{
|
||||
Hostname: "test host",
|
||||
PublicIPv4: net.ParseIP("10.0.0.101"),
|
||||
PrivateIPv4: net.ParseIP("10.0.0.102"),
|
||||
NetworkConfig: map[string]string{
|
||||
"interface.0.name": "test name",
|
||||
"interface.0.ip.0.address": "10.0.0.100/24",
|
||||
"interface.0.ip.1.address": "10.0.0.101/24",
|
||||
"interface.0.route.0.gateway": "10.0.0.1",
|
||||
"interface.0.route.0.destination": "0.0.0.0",
|
||||
"interface.1.mac": "test mac",
|
||||
"interface.1.route.0.gateway": "10.0.0.2",
|
||||
"interface.1.route.0.destination": "0.0.0.0",
|
||||
"interface.1.ip.0.address": "10.0.0.102/24",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
v := vmware{readConfig: tt.variables.ReadConfig}
|
||||
metadata, err := v.FetchMetadata()
|
||||
if !reflect.DeepEqual(tt.err, err) {
|
||||
t.Errorf("bad error (#%d): want %v, got %v", i, tt.err, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.metadata, metadata) {
|
||||
t.Errorf("bad metadata (#%d): want %#v, got %#v", i, tt.metadata, metadata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchUserdata(t *testing.T) {
|
||||
tests := []struct {
|
||||
variables MockHypervisor
|
||||
|
||||
userdata string
|
||||
err error
|
||||
}{
|
||||
{},
|
||||
{
|
||||
variables: map[string]string{"coreos.config.data": "test config"},
|
||||
userdata: "test config",
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"coreos.config.data.encoding": "",
|
||||
"coreos.config.data": "test config",
|
||||
},
|
||||
userdata: "test config",
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"coreos.config.data.encoding": "base64",
|
||||
"coreos.config.data": "dGVzdCBjb25maWc=",
|
||||
},
|
||||
userdata: "test config",
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"coreos.config.data.encoding": "gzip+base64",
|
||||
"coreos.config.data": "H4sIABaoWlUAAytJLS5RSM7PS8tMBwCQiHNZCwAAAA==",
|
||||
},
|
||||
userdata: "test config",
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"coreos.config.data.encoding": "test encoding",
|
||||
},
|
||||
err: errors.New(`Unsupported encoding "test encoding"`),
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"coreos.config.url": "http://good.example.com",
|
||||
},
|
||||
userdata: "test config",
|
||||
},
|
||||
{
|
||||
variables: map[string]string{
|
||||
"coreos.config.url": "http://bad.example.com",
|
||||
},
|
||||
err: errors.New("Not found"),
|
||||
},
|
||||
}
|
||||
|
||||
var downloader urlDownloadFunction = func(url string) ([]byte, error) {
|
||||
mapping := map[string]struct {
|
||||
data []byte
|
||||
err error
|
||||
}{
|
||||
"http://good.example.com": {[]byte("test config"), nil},
|
||||
"http://bad.example.com": {nil, errors.New("Not found")},
|
||||
}
|
||||
val := mapping[url]
|
||||
return val.data, val.err
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
v := vmware{
|
||||
readConfig: tt.variables.ReadConfig,
|
||||
urlDownload: downloader,
|
||||
}
|
||||
userdata, err := v.FetchUserdata()
|
||||
if !reflect.DeepEqual(tt.err, err) {
|
||||
t.Errorf("bad error (#%d): want %v, got %v", i, tt.err, err)
|
||||
}
|
||||
if tt.userdata != string(userdata) {
|
||||
t.Errorf("bad userdata (#%d): want %q, got %q", i, tt.userdata, userdata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchUserdataError(t *testing.T) {
|
||||
testErr := errors.New("test error")
|
||||
_, err := vmware{readConfig: func(_ string) (string, error) { return "", testErr }}.FetchUserdata()
|
||||
|
||||
if testErr != err {
|
||||
t.Errorf("bad error: want %v, got %v", testErr, err)
|
||||
}
|
||||
}
|
@ -1,117 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package waagent

import (
	"encoding/xml"
	"io/ioutil"
	"log"
	"net"
	"os"
	"path"

	"github.com/coreos/coreos-cloudinit/datasource"
)

type waagent struct {
	root     string
	readFile func(filename string) ([]byte, error)
}

func NewDatasource(root string) *waagent {
	return &waagent{root, ioutil.ReadFile}
}

func (a *waagent) IsAvailable() bool {
	_, err := os.Stat(path.Join(a.root, "provisioned"))
	return !os.IsNotExist(err)
}

func (a *waagent) AvailabilityChanges() bool {
	return true
}

func (a *waagent) ConfigRoot() string {
	return a.root
}

func (a *waagent) FetchMetadata() (metadata datasource.Metadata, err error) {
	var metadataBytes []byte
	if metadataBytes, err = a.tryReadFile(path.Join(a.root, "SharedConfig.xml")); err != nil {
		return
	}
	if len(metadataBytes) == 0 {
		return
	}

	type Instance struct {
		Id             string `xml:"id,attr"`
		Address        string `xml:"address,attr"`
		InputEndpoints struct {
			Endpoints []struct {
				LoadBalancedPublicAddress string `xml:"loadBalancedPublicAddress,attr"`
			} `xml:"Endpoint"`
		}
	}

	type SharedConfig struct {
		Incarnation struct {
			Instance string `xml:"instance,attr"`
		}
		Instances struct {
			Instances []Instance `xml:"Instance"`
		}
	}

	var m SharedConfig
	if err = xml.Unmarshal(metadataBytes, &m); err != nil {
		return
	}

	var instance Instance
	for _, i := range m.Instances.Instances {
		if i.Id == m.Incarnation.Instance {
			instance = i
			break
		}
	}

	metadata.PrivateIPv4 = net.ParseIP(instance.Address)
	for _, e := range instance.InputEndpoints.Endpoints {
		host, _, err := net.SplitHostPort(e.LoadBalancedPublicAddress)
		if err == nil {
			metadata.PublicIPv4 = net.ParseIP(host)
			break
		}
	}
	return
}

func (a *waagent) FetchUserdata() ([]byte, error) {
	return a.tryReadFile(path.Join(a.root, "CustomData"))
}

func (a *waagent) Type() string {
	return "waagent"
}

func (a *waagent) tryReadFile(filename string) ([]byte, error) {
	log.Printf("Attempting to read from %q\n", filename)
	data, err := a.readFile(filename)
	if os.IsNotExist(err) {
		err = nil
	}
	return data, err
}
@ -1,166 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package waagent
|
||||
|
||||
import (
|
||||
"net"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/test"
|
||||
)
|
||||
|
||||
func TestFetchMetadata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
files test.MockFilesystem
|
||||
metadata datasource.Metadata
|
||||
}{
|
||||
{
|
||||
root: "/",
|
||||
files: test.NewMockFilesystem(),
|
||||
},
|
||||
{
|
||||
root: "/",
|
||||
files: test.NewMockFilesystem(test.File{Path: "/SharedConfig.xml", Contents: ""}),
|
||||
},
|
||||
{
|
||||
root: "/var/lib/waagent",
|
||||
files: test.NewMockFilesystem(test.File{Path: "/var/lib/waagent/SharedConfig.xml", Contents: ""}),
|
||||
},
|
||||
{
|
||||
root: "/var/lib/waagent",
|
||||
files: test.NewMockFilesystem(test.File{Path: "/var/lib/waagent/SharedConfig.xml", Contents: `<?xml version="1.0" encoding="utf-8"?>
|
||||
<SharedConfig version="1.0.0.0" goalStateIncarnation="1">
|
||||
<Deployment name="c8f9e4c9c18948e1bebf57c5685da756" guid="{1d10394f-c741-4a1a-a6bb-278f213c5a5e}" incarnation="0" isNonCancellableTopologyChangeEnabled="false">
|
||||
<Service name="core-test-1" guid="{00000000-0000-0000-0000-000000000000}" />
|
||||
<ServiceInstance name="c8f9e4c9c18948e1bebf57c5685da756.0" guid="{1e202e9a-8ffe-4915-b6ef-4118c9628fda}" />
|
||||
</Deployment>
|
||||
<Incarnation number="1" instance="core-test-1" guid="{8767eb4b-b445-4783-b1f5-6c0beaf41ea0}" />
|
||||
<Role guid="{53ecc81e-257f-fbc9-a53a-8cf1a0a122b4}" name="core-test-1" settleTimeSeconds="0" />
|
||||
<LoadBalancerSettings timeoutSeconds="0" waitLoadBalancerProbeCount="8">
|
||||
<Probes>
|
||||
<Probe name="D41D8CD98F00B204E9800998ECF8427E" />
|
||||
<Probe name="C9DEC1518E1158748FA4B6081A8266DD" />
|
||||
</Probes>
|
||||
</LoadBalancerSettings>
|
||||
<OutputEndpoints>
|
||||
<Endpoint name="core-test-1:openInternalEndpoint" type="SFS">
|
||||
<Target instance="core-test-1" endpoint="openInternalEndpoint" />
|
||||
</Endpoint>
|
||||
</OutputEndpoints>
|
||||
<Instances>
|
||||
<Instance id="core-test-1" address="100.73.202.64">
|
||||
<FaultDomains randomId="0" updateId="0" updateCount="0" />
|
||||
<InputEndpoints>
|
||||
<Endpoint name="openInternalEndpoint" address="100.73.202.64" protocol="any" isPublic="false" enableDirectServerReturn="false" isDirectAddress="false" disableStealthMode="false">
|
||||
<LocalPorts>
|
||||
<LocalPortSelfManaged />
|
||||
</LocalPorts>
|
||||
</Endpoint>
|
||||
<Endpoint name="ssh" address="100.73.202.64:22" protocol="tcp" hostName="core-test-1ContractContract" isPublic="true" loadBalancedPublicAddress="191.239.39.77:22" enableDirectServerReturn="false" isDirectAddress="false" disableStealthMode="false">
|
||||
<LocalPorts>
|
||||
<LocalPortRange from="22" to="22" />
|
||||
</LocalPorts>
|
||||
</Endpoint>
|
||||
</InputEndpoints>
|
||||
</Instance>
|
||||
</Instances>
|
||||
</SharedConfig>`}),
|
||||
metadata: datasource.Metadata{
|
||||
PrivateIPv4: net.ParseIP("100.73.202.64"),
|
||||
PublicIPv4: net.ParseIP("191.239.39.77"),
|
||||
},
|
||||
},
|
||||
} {
|
||||
a := waagent{tt.root, tt.files.ReadFile}
|
||||
metadata, err := a.FetchMetadata()
|
||||
if err != nil {
|
||||
t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.metadata, metadata) {
|
||||
t.Fatalf("bad metadata for %+v: want %#v, got %#v", tt, tt.metadata, metadata)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchUserdata(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
files test.MockFilesystem
|
||||
}{
|
||||
{
|
||||
"/",
|
||||
test.NewMockFilesystem(),
|
||||
},
|
||||
{
|
||||
"/",
|
||||
test.NewMockFilesystem(test.File{Path: "/CustomData", Contents: ""}),
|
||||
},
|
||||
{
|
||||
"/var/lib/waagent/",
|
||||
test.NewMockFilesystem(test.File{Path: "/var/lib/waagent/CustomData", Contents: ""}),
|
||||
},
|
||||
} {
|
||||
a := waagent{tt.root, tt.files.ReadFile}
|
||||
_, err := a.FetchUserdata()
|
||||
if err != nil {
|
||||
t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigRoot(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
configRoot string
|
||||
}{
|
||||
{
|
||||
"/",
|
||||
"/",
|
||||
},
|
||||
{
|
||||
"/var/lib/waagent",
|
||||
"/var/lib/waagent",
|
||||
},
|
||||
} {
|
||||
a := waagent{tt.root, nil}
|
||||
if configRoot := a.ConfigRoot(); configRoot != tt.configRoot {
|
||||
t.Fatalf("bad config root for %q: want %q, got %q", tt, tt.configRoot, configRoot)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewDatasource(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
root string
|
||||
expectRoot string
|
||||
}{
|
||||
{
|
||||
root: "",
|
||||
expectRoot: "",
|
||||
},
|
||||
{
|
||||
root: "/var/lib/waagent",
|
||||
expectRoot: "/var/lib/waagent",
|
||||
},
|
||||
} {
|
||||
service := NewDatasource(tt.root)
|
||||
if service.root != tt.expectRoot {
|
||||
t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.root)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,11 +0,0 @@
# Automatically trigger configdrive mounting.

ACTION!="add|change", GOTO="coreos_configdrive_end"

# A normal config drive. Block device formatted with iso9660 or fat
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="iso9660|vfat", ENV{ID_FS_LABEL}=="config-2", TAG+="systemd", ENV{SYSTEMD_WANTS}+="media-configdrive.mount"

# Additionally support virtfs from QEMU
SUBSYSTEM=="virtio", DRIVER=="9pnet_virtio", ATTR{mount_tag}=="config-2", TAG+="systemd", ENV{SYSTEMD_WANTS}+="media-configvirtfs.mount"

LABEL="coreos_configdrive_end"
@ -1,13 +0,0 @@
[Unit]
Wants=user-configdrive.service
Before=user-configdrive.service
# Only mount config drive block devices automatically in virtual machines
# or any host that has it explicitly enabled and not explicitly disabled.
ConditionVirtualization=|vm
ConditionKernelCommandLine=|coreos.configdrive=1
ConditionKernelCommandLine=!coreos.configdrive=0

[Mount]
What=LABEL=config-2
Where=/media/configdrive
Options=ro
@ -1,18 +0,0 @@
[Unit]
Wants=user-configvirtfs.service
Before=user-configvirtfs.service
# Only mount config drive block devices automatically in virtual machines
# or any host that has it explicitly enabled and not explicitly disabled.
ConditionVirtualization=|vm
ConditionKernelCommandLine=|coreos.configdrive=1
ConditionKernelCommandLine=!coreos.configdrive=0

# Support old style setup for now
Wants=addon-run@media-configvirtfs.service addon-config@media-configvirtfs.service
Before=addon-run@media-configvirtfs.service addon-config@media-configvirtfs.service

[Mount]
What=config-2
Where=/media/configvirtfs
Options=ro,trans=virtio,version=9p2000.L
Type=9p
@ -1,11 +0,0 @@
[Unit]
Description=Load cloud-config from %f
Requires=dbus.service
After=dbus.service
Before=system-config.target
ConditionFileNotEmpty=%f

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/coreos-cloudinit --from-file=%f
@ -1,10 +0,0 @@
[Unit]
Description=Load system-provided cloud configs

# Generate /etc/environment
Requires=coreos-setup-environment.service
After=coreos-setup-environment.service

# Load OEM cloud-config.yml
Requires=system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service
After=system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service
@ -1,12 +0,0 @@
[Unit]
Description=Load cloud-config from url defined in /proc/cmdline
Requires=coreos-setup-environment.service
After=coreos-setup-environment.service
Before=user-config.target
ConditionKernelCommandLine=cloud-config-url

[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/environment
ExecStart=/usr/bin/coreos-cloudinit --from-proc-cmdline
@ -1,5 +0,0 @@
[Unit]
Description=Watch for a cloud-config at %f

[Path]
PathExists=%f
@ -1,12 +0,0 @@
[Unit]
Description=Load cloud-config from %f
Requires=coreos-setup-environment.service
After=coreos-setup-environment.service
Before=user-config.target
ConditionFileNotEmpty=%f

[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/environment
ExecStart=/usr/bin/coreos-cloudinit --from-file=%f
@ -1,13 +0,0 @@
[Unit]
Description=Load user-provided cloud configs
Requires=system-config.target
After=system-config.target

# Watch for configs at a couple common paths
Requires=user-configdrive.path
After=user-configdrive.path
Requires=user-cloudinit@var-lib-coreos\x2dinstall-user_data.path
After=user-cloudinit@var-lib-coreos\x2dinstall-user_data.path

Requires=user-cloudinit-proc-cmdline.service
After=user-cloudinit-proc-cmdline.service
@ -1,10 +0,0 @@
[Unit]
Description=Watch for a cloud-config at /media/configdrive

# Note: This unit is essentially just here as a fall-back mechanism to
# trigger cloudinit if it isn't triggered explicitly by other means
# such as by a Wants= in the mount unit. This ensures we handle the
# case where /media/configdrive is provided to a CoreOS container.

[Path]
DirectoryNotEmpty=/media/configdrive
@ -1,22 +0,0 @@
[Unit]
Description=Load cloud-config from /media/configdrive
Requires=coreos-setup-environment.service
After=coreos-setup-environment.service system-config.target
Before=user-config.target

# HACK: work around ordering between config drive and ec2 metadata. It is
# possible for OpenStack style systems to provide both the metadata service
# and config drive; to prevent the two from stomping on each other, force
# this to run after OEM and after metadata (if it exists). I'm doing this
# here instead of in the ec2 service because the ec2 unit is not written
# to disk until the OEM cloud config is evaluated and I want to make sure
# systemd knows about the ordering as early as possible.
# coreos-cloudinit could implement a simple lock but that cannot be used
# until after the systemd dbus calls are made non-blocking.
After=ec2-cloudinit.service

[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/environment
ExecStart=/usr/bin/coreos-cloudinit --from-configdrive=/media/configdrive
@ -1,11 +0,0 @@
[Unit]
Description=Load cloud-config from /media/configvirtfs
Requires=coreos-setup-environment.service
After=coreos-setup-environment.service
Before=user-config.target

[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/environment
ExecStart=/usr/bin/coreos-cloudinit --from-configdrive=/media/configvirtfs
@ -1,349 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package initialize
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
"github.com/coreos/coreos-cloudinit/network"
|
||||
"github.com/coreos/coreos-cloudinit/system"
|
||||
)
|
||||
|
||||
// CloudConfigFile represents a CoreOS specific configuration option that can generate
|
||||
// an associated system.File to be written to disk
|
||||
type CloudConfigFile interface {
|
||||
// File should either return (*system.File, error), or (nil, nil) if nothing
|
||||
// needs to be done for this configuration option.
|
||||
File() (*system.File, error)
|
||||
}
|
||||
|
||||
// CloudConfigUnit represents a CoreOS specific configuration option that can generate
|
||||
// associated system.Units to be created/enabled appropriately
|
||||
type CloudConfigUnit interface {
|
||||
Units() []system.Unit
|
||||
}
|
||||
|
||||
func isLock(env *Environment) bool {
|
||||
if _, err := os.Stat(path.Join(env.Workspace(), ".lock")); err != nil {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func Lock(env *Environment) error {
|
||||
if !isLock(env) {
|
||||
fp, err := os.OpenFile(path.Join(env.Workspace(), ".lock"), os.O_WRONLY|os.O_CREATE|os.O_EXCL|os.O_TRUNC, os.FileMode(0644))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return fp.Close()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Apply renders a CloudConfig to an Environment. This can involve things like
|
||||
// configuring the hostname, adding new users, writing various configuration
|
||||
// files to disk, and manipulating systemd services.
|
||||
func Apply(cfg config.CloudConfig, ifaces []network.InterfaceGenerator, env *Environment) error {
|
||||
var err error
|
||||
|
||||
for _, cmdline := range cfg.RunCMD {
|
||||
prog := strings.Fields(cmdline)[0]
|
||||
args := strings.Fields(cmdline)[1:]
|
||||
exec.Command(prog, args...).Run()
|
||||
}
|
||||
|
||||
if err = os.MkdirAll(env.Workspace(), os.FileMode(0755)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !isLock(env) {
|
||||
if cfg.Hostname != "" {
|
||||
if err = system.SetHostname(cfg.Hostname); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Set hostname to %s", cfg.Hostname)
|
||||
}
|
||||
}
|
||||
|
||||
for _, user := range cfg.Users {
|
||||
if user.Name == "" {
|
||||
log.Printf("User object has no 'name' field, skipping")
|
||||
continue
|
||||
}
|
||||
|
||||
if !isLock(env) {
|
||||
if system.UserExists(&user) {
|
||||
log.Printf("User '%s' exists, ignoring creation-time fields", user.Name)
|
||||
if user.PasswordHash != "" {
|
||||
log.Printf("Setting '%s' user's password", user.Name)
|
||||
if err := system.SetUserPassword(user.Name, user.PasswordHash); err != nil {
|
||||
log.Printf("Failed setting '%s' user's password: %v", user.Name, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
} else {
|
||||
log.Printf("Creating user '%s'", user.Name)
|
||||
if err = system.CreateUser(&user); err != nil {
|
||||
log.Printf("Failed creating user '%s': %v", user.Name, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if err = system.LockUnlockUser(&user); err != nil {
|
||||
log.Printf("Failed lock/unlock user '%s': %v", user.Name, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(user.SSHAuthorizedKeys) > 0 {
|
||||
log.Printf("Authorizing %d SSH keys for user '%s'", len(user.SSHAuthorizedKeys), user.Name)
|
||||
if err = system.AuthorizeSSHKeys(user.Name, env.SSHKeyName(), user.SSHAuthorizedKeys); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if user.SSHImportGithubUser != "" {
|
||||
log.Printf("Authorizing github user %s SSH keys for CoreOS user '%s'", user.SSHImportGithubUser, user.Name)
|
||||
if err = SSHImportGithubUser(user.Name, user.SSHImportGithubUser); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
for _, u := range user.SSHImportGithubUsers {
|
||||
log.Printf("Authorizing github user %s SSH keys for CoreOS user '%s'", u, user.Name)
|
||||
if err = SSHImportGithubUser(user.Name, u); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if user.SSHImportURL != "" {
|
||||
log.Printf("Authorizing SSH keys for CoreOS user '%s' from '%s'", user.Name, user.SSHImportURL)
|
||||
if err = SSHImportKeysFromURL(user.Name, user.SSHImportURL); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.SSHAuthorizedKeys) > 0 {
|
||||
err = system.AuthorizeSSHKeys(cfg.SystemInfo.DefaultUser.Name, env.SSHKeyName(), cfg.SSHAuthorizedKeys)
|
||||
if err == nil {
|
||||
log.Printf("Authorized SSH keys for %s user", cfg.SystemInfo.DefaultUser.Name)
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if !isLock(env) {
|
||||
var writeFiles []system.File
|
||||
for _, file := range cfg.WriteFiles {
|
||||
writeFiles = append(writeFiles, system.File{File: file})
|
||||
}
|
||||
|
||||
for _, ccf := range []CloudConfigFile{
|
||||
system.OEM{OEM: cfg.CoreOS.OEM},
|
||||
system.Update{Update: cfg.CoreOS.Update, ReadConfig: system.DefaultReadConfig},
|
||||
system.EtcHosts{EtcHosts: cfg.ManageEtcHosts},
|
||||
system.Flannel{Flannel: cfg.CoreOS.Flannel},
|
||||
} {
|
||||
f, err := ccf.File()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if f != nil {
|
||||
writeFiles = append(writeFiles, *f)
|
||||
}
|
||||
}
|
||||
|
||||
var units []system.Unit
|
||||
for _, u := range cfg.CoreOS.Units {
|
||||
units = append(units, system.Unit{Unit: u})
|
||||
}
|
||||
|
||||
for _, ccu := range []CloudConfigUnit{
|
||||
system.Etcd{Etcd: cfg.CoreOS.Etcd},
|
||||
system.Etcd2{Etcd2: cfg.CoreOS.Etcd2},
|
||||
system.Fleet{Fleet: cfg.CoreOS.Fleet},
|
||||
system.Locksmith{Locksmith: cfg.CoreOS.Locksmith},
|
||||
system.Update{Update: cfg.CoreOS.Update, ReadConfig: system.DefaultReadConfig},
|
||||
} {
|
||||
units = append(units, ccu.Units()...)
|
||||
}
|
||||
|
||||
wroteEnvironment := false
|
||||
for _, file := range writeFiles {
|
||||
fullPath, err := system.WriteFile(&file, env.Root())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if path.Clean(file.Path) == "/etc/environment" {
|
||||
wroteEnvironment = true
|
||||
}
|
||||
log.Printf("Wrote file %s to filesystem", fullPath)
|
||||
}
|
||||
|
||||
if !wroteEnvironment {
|
||||
ef := env.DefaultEnvironmentFile()
|
||||
if ef != nil {
|
||||
err := system.WriteEnvFile(ef, env.Root())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Updated /etc/environment")
|
||||
}
|
||||
}
|
||||
|
||||
if len(ifaces) > 0 {
|
||||
units = append(units, createNetworkingUnits(ifaces)...)
|
||||
if err = system.RestartNetwork(ifaces); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
um := system.NewUnitManager(env.Root())
|
||||
if err = processUnits(units, env.Root(), um); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if cfg.ResizeRootfs {
|
||||
log.Printf("resize root filesystem")
|
||||
if err = system.ResizeRootFS(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return Lock(env)
|
||||
}
|
||||
|
||||
func createNetworkingUnits(interfaces []network.InterfaceGenerator) (units []system.Unit) {
|
||||
appendNewUnit := func(units []system.Unit, name, content string) []system.Unit {
|
||||
if content == "" {
|
||||
return units
|
||||
}
|
||||
return append(units, system.Unit{Unit: config.Unit{
|
||||
Name: name,
|
||||
Runtime: true,
|
||||
Content: content,
|
||||
}})
|
||||
}
|
||||
for _, i := range interfaces {
|
||||
units = appendNewUnit(units, fmt.Sprintf("%s.netdev", i.Filename()), i.Netdev())
|
||||
units = appendNewUnit(units, fmt.Sprintf("%s.link", i.Filename()), i.Link())
|
||||
units = appendNewUnit(units, fmt.Sprintf("%s.network", i.Filename()), i.Network())
|
||||
}
|
||||
return units
|
||||
}
|
||||
|
||||
// processUnits takes a set of Units and applies them to the given root using
|
||||
// the given UnitManager. This can involve things like writing unit files to
|
||||
// disk, masking/unmasking units, or invoking systemd
|
||||
// commands against units. It returns any error encountered.
|
||||
func processUnits(units []system.Unit, root string, um system.UnitManager) error {
|
||||
type action struct {
|
||||
unit system.Unit
|
||||
command string
|
||||
}
|
||||
actions := make([]action, 0, len(units))
|
||||
reload := false
|
||||
restartNetworkd := false
|
||||
for _, unit := range units {
|
||||
if unit.Name == "" {
|
||||
log.Printf("Skipping unit without name")
|
||||
continue
|
||||
}
|
||||
|
||||
if unit.Content != "" {
|
||||
log.Printf("Writing unit %q to filesystem", unit.Name)
|
||||
if err := um.PlaceUnit(unit); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Wrote unit %q", unit.Name)
|
||||
reload = true
|
||||
}
|
||||
|
||||
for _, dropin := range unit.DropIns {
|
||||
if dropin.Name != "" && dropin.Content != "" {
|
||||
log.Printf("Writing drop-in unit %q to filesystem", dropin.Name)
|
||||
if err := um.PlaceUnitDropIn(unit, dropin); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Wrote drop-in unit %q", dropin.Name)
|
||||
reload = true
|
||||
}
|
||||
}
|
||||
|
||||
if unit.Mask {
|
||||
log.Printf("Masking unit file %q", unit.Name)
|
||||
if err := um.MaskUnit(unit); err != nil {
|
||||
return err
|
||||
}
|
||||
} else if unit.Runtime {
|
||||
log.Printf("Ensuring runtime unit file %q is unmasked", unit.Name)
|
||||
if err := um.UnmaskUnit(unit); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if unit.Enable {
|
||||
if unit.Group() != "network" {
|
||||
log.Printf("Enabling unit file %q", unit.Name)
|
||||
if err := um.EnableUnitFile(unit); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Enabled unit %q", unit.Name)
|
||||
} else {
|
||||
log.Printf("Skipping enable for network-like unit %q", unit.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if unit.Group() == "network" {
|
||||
restartNetworkd = true
|
||||
} else if unit.Command != "" {
|
||||
actions = append(actions, action{unit, unit.Command})
|
||||
}
|
||||
}
|
||||
|
||||
if reload {
|
||||
if err := um.DaemonReload(); err != nil {
|
||||
return errors.New(fmt.Sprintf("failed systemd daemon-reload: %s", err))
|
||||
}
|
||||
}
|
||||
|
||||
if restartNetworkd {
|
||||
log.Printf("Restarting systemd-networkd")
|
||||
networkd := system.Unit{Unit: config.Unit{Name: "systemd-networkd.service"}}
|
||||
res, err := um.RunUnitCommand(networkd, "restart")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Restarted systemd-networkd (%s)", res)
|
||||
}
|
||||
|
||||
for _, action := range actions {
|
||||
log.Printf("Calling unit command %q on %q'", action.command, action.unit.Name)
|
||||
res, err := um.RunUnitCommand(action.unit, action.command)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Result of %q on %q: %s", action.command, action.unit.Name, res)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
@ -1,299 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package initialize
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
"github.com/coreos/coreos-cloudinit/network"
|
||||
"github.com/coreos/coreos-cloudinit/system"
|
||||
)
|
||||
|
||||
type TestUnitManager struct {
|
||||
placed []string
|
||||
enabled []string
|
||||
masked []string
|
||||
unmasked []string
|
||||
commands []UnitAction
|
||||
reload bool
|
||||
}
|
||||
|
||||
type UnitAction struct {
|
||||
unit string
|
||||
command string
|
||||
}
|
||||
|
||||
func (tum *TestUnitManager) PlaceUnit(u system.Unit) error {
|
||||
tum.placed = append(tum.placed, u.Name)
|
||||
return nil
|
||||
}
|
||||
func (tum *TestUnitManager) PlaceUnitDropIn(u system.Unit, d config.UnitDropIn) error {
|
||||
tum.placed = append(tum.placed, u.Name+".d/"+d.Name)
|
||||
return nil
|
||||
}
|
||||
func (tum *TestUnitManager) EnableUnitFile(u system.Unit) error {
|
||||
tum.enabled = append(tum.enabled, u.Name)
|
||||
return nil
|
||||
}
|
||||
func (tum *TestUnitManager) RunUnitCommand(u system.Unit, c string) (string, error) {
|
||||
tum.commands = append(tum.commands, UnitAction{u.Name, c})
|
||||
return "", nil
|
||||
}
|
||||
func (tum *TestUnitManager) DaemonReload() error {
|
||||
tum.reload = true
|
||||
return nil
|
||||
}
|
||||
func (tum *TestUnitManager) MaskUnit(u system.Unit) error {
|
||||
tum.masked = append(tum.masked, u.Name)
|
||||
return nil
|
||||
}
|
||||
func (tum *TestUnitManager) UnmaskUnit(u system.Unit) error {
|
||||
tum.unmasked = append(tum.unmasked, u.Name)
|
||||
return nil
|
||||
}
|
||||
|
||||
type mockInterface struct {
|
||||
name string
|
||||
filename string
|
||||
netdev string
|
||||
link string
|
||||
network string
|
||||
kind string
|
||||
modprobeParams string
|
||||
}
|
||||
|
||||
func (i mockInterface) Name() string {
|
||||
return i.name
|
||||
}
|
||||
|
||||
func (i mockInterface) Filename() string {
|
||||
return i.filename
|
||||
}
|
||||
|
||||
func (i mockInterface) Netdev() string {
|
||||
return i.netdev
|
||||
}
|
||||
|
||||
func (i mockInterface) Link() string {
|
||||
return i.link
|
||||
}
|
||||
|
||||
func (i mockInterface) Network() string {
|
||||
return i.network
|
||||
}
|
||||
|
||||
func (i mockInterface) Type() string {
|
||||
return i.kind
|
||||
}
|
||||
|
||||
func (i mockInterface) ModprobeParams() string {
|
||||
return i.modprobeParams
|
||||
}
|
||||
|
||||
func TestCreateNetworkingUnits(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
interfaces []network.InterfaceGenerator
|
||||
expect []system.Unit
|
||||
}{
|
||||
{nil, nil},
|
||||
{
|
||||
[]network.InterfaceGenerator{
|
||||
network.InterfaceGenerator(mockInterface{filename: "test"}),
|
||||
},
|
||||
nil,
|
||||
},
|
||||
{
|
||||
[]network.InterfaceGenerator{
|
||||
network.InterfaceGenerator(mockInterface{filename: "test1", netdev: "test netdev"}),
|
||||
network.InterfaceGenerator(mockInterface{filename: "test2", link: "test link"}),
|
||||
network.InterfaceGenerator(mockInterface{filename: "test3", network: "test network"}),
|
||||
},
|
||||
[]system.Unit{
|
||||
system.Unit{Unit: config.Unit{Name: "test1.netdev", Runtime: true, Content: "test netdev"}},
|
||||
system.Unit{Unit: config.Unit{Name: "test2.link", Runtime: true, Content: "test link"}},
|
||||
system.Unit{Unit: config.Unit{Name: "test3.network", Runtime: true, Content: "test network"}},
|
||||
},
|
||||
},
|
||||
{
|
||||
[]network.InterfaceGenerator{
|
||||
network.InterfaceGenerator(mockInterface{filename: "test", netdev: "test netdev", link: "test link", network: "test network"}),
|
||||
},
|
||||
[]system.Unit{
|
||||
system.Unit{Unit: config.Unit{Name: "test.netdev", Runtime: true, Content: "test netdev"}},
|
||||
system.Unit{Unit: config.Unit{Name: "test.link", Runtime: true, Content: "test link"}},
|
||||
system.Unit{Unit: config.Unit{Name: "test.network", Runtime: true, Content: "test network"}},
|
||||
},
|
||||
},
|
||||
} {
|
||||
units := createNetworkingUnits(tt.interfaces)
|
||||
if !reflect.DeepEqual(tt.expect, units) {
|
||||
t.Errorf("bad units (%+v): want %#v, got %#v", tt.interfaces, tt.expect, units)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestProcessUnits(t *testing.T) {
|
||||
tests := []struct {
|
||||
units []system.Unit
|
||||
|
||||
result TestUnitManager
|
||||
}{
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "foo",
|
||||
Mask: true,
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{
|
||||
masked: []string{"foo"},
|
||||
},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "baz.service",
|
||||
Content: "[Service]\nExecStart=/bin/baz",
|
||||
Command: "start",
|
||||
}},
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "foo.network",
|
||||
Content: "[Network]\nFoo=true",
|
||||
}},
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "bar.network",
|
||||
Content: "[Network]\nBar=true",
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{
|
||||
placed: []string{"baz.service", "foo.network", "bar.network"},
|
||||
commands: []UnitAction{
|
||||
UnitAction{"systemd-networkd.service", "restart"},
|
||||
UnitAction{"baz.service", "start"},
|
||||
},
|
||||
reload: true,
|
||||
},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "baz.service",
|
||||
Content: "[Service]\nExecStart=/bin/true",
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{
|
||||
placed: []string{"baz.service"},
|
||||
reload: true,
|
||||
},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "locksmithd.service",
|
||||
Runtime: true,
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{
|
||||
unmasked: []string{"locksmithd.service"},
|
||||
},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "woof",
|
||||
Enable: true,
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{
|
||||
enabled: []string{"woof"},
|
||||
},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "hi.service",
|
||||
Runtime: true,
|
||||
Content: "[Service]\nExecStart=/bin/echo hi",
|
||||
DropIns: []config.UnitDropIn{
|
||||
{
|
||||
Name: "lo.conf",
|
||||
Content: "[Service]\nExecStart=/bin/echo lo",
|
||||
},
|
||||
{
|
||||
Name: "bye.conf",
|
||||
Content: "[Service]\nExecStart=/bin/echo bye",
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{
|
||||
placed: []string{"hi.service", "hi.service.d/lo.conf", "hi.service.d/bye.conf"},
|
||||
unmasked: []string{"hi.service"},
|
||||
reload: true,
|
||||
},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
DropIns: []config.UnitDropIn{
|
||||
{
|
||||
Name: "lo.conf",
|
||||
Content: "[Service]\nExecStart=/bin/echo lo",
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "hi.service",
|
||||
DropIns: []config.UnitDropIn{
|
||||
{
|
||||
Content: "[Service]\nExecStart=/bin/echo lo",
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{},
|
||||
},
|
||||
{
|
||||
units: []system.Unit{
|
||||
system.Unit{Unit: config.Unit{
|
||||
Name: "hi.service",
|
||||
DropIns: []config.UnitDropIn{
|
||||
{
|
||||
Name: "lo.conf",
|
||||
},
|
||||
},
|
||||
}},
|
||||
},
|
||||
result: TestUnitManager{},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
tum := &TestUnitManager{}
|
||||
if err := processUnits(tt.units, "", tum); err != nil {
|
||||
t.Errorf("bad error (%+v): want nil, got %s", tt.units, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.result, *tum) {
|
||||
t.Errorf("bad result (%+v): want %+v, got %+v", tt.units, tt.result, tum)
|
||||
}
|
||||
}
|
||||
}
|
@ -1,116 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package initialize

import (
	"net"
	"os"
	"path"
	"regexp"
	"strings"

	"github.com/coreos/coreos-cloudinit/config"
	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/system"
)

const DefaultSSHKeyName = "coreos-cloudinit"

type Environment struct {
	root          string
	configRoot    string
	workspace     string
	sshKeyName    string
	substitutions map[string]string
}

// TODO(jonboulle): this is getting unwieldy, should be able to simplify the interface somehow
func NewEnvironment(root, configRoot, workspace, sshKeyName string, metadata datasource.Metadata) *Environment {
	firstNonNull := func(ip net.IP, env string) string {
		if ip == nil {
			return env
		}
		return ip.String()
	}
	substitutions := map[string]string{
		"$public_ipv4":  firstNonNull(metadata.PublicIPv4, os.Getenv("COREOS_PUBLIC_IPV4")),
		"$private_ipv4": firstNonNull(metadata.PrivateIPv4, os.Getenv("COREOS_PRIVATE_IPV4")),
		"$public_ipv6":  firstNonNull(metadata.PublicIPv6, os.Getenv("COREOS_PUBLIC_IPV6")),
		"$private_ipv6": firstNonNull(metadata.PrivateIPv6, os.Getenv("COREOS_PRIVATE_IPV6")),
	}
	return &Environment{root, configRoot, workspace, sshKeyName, substitutions}
}

func (e *Environment) Workspace() string {
	return path.Join(e.root, e.workspace)
}

func (e *Environment) Root() string {
	return e.root
}

func (e *Environment) ConfigRoot() string {
	return e.configRoot
}

func (e *Environment) SSHKeyName() string {
	return e.sshKeyName
}

func (e *Environment) SetSSHKeyName(name string) {
	e.sshKeyName = name
}

// Apply goes through the map of substitutions and replaces all instances of
// the keys with their respective values. It supports escaping substitutions
// with a leading '\'.
func (e *Environment) Apply(data string) string {
	for key, val := range e.substitutions {
		matchKey := strings.Replace(key, `$`, `\$`, -1)
		replKey := strings.Replace(key, `$`, `$$`, -1)

		// "key" -> "val"
		data = regexp.MustCompile(`([^\\]|^)`+matchKey).ReplaceAllString(data, `${1}`+val)
		// "\key" -> "key"
		data = regexp.MustCompile(`\\`+matchKey).ReplaceAllString(data, replKey)
	}
	return data
}

func (e *Environment) DefaultEnvironmentFile() *system.EnvFile {
	ef := system.EnvFile{
		File: &system.File{File: config.File{
			Path: "/etc/environment",
		}},
		Vars: map[string]string{},
	}
	if ip, ok := e.substitutions["$public_ipv4"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PUBLIC_IPV4"] = ip
	}
	if ip, ok := e.substitutions["$private_ipv4"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PRIVATE_IPV4"] = ip
	}
	if ip, ok := e.substitutions["$public_ipv6"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PUBLIC_IPV6"] = ip
	}
	if ip, ok := e.substitutions["$private_ipv6"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PRIVATE_IPV6"] = ip
	}
	if len(ef.Vars) == 0 {
		return nil
	} else {
		return &ef
	}
}
Some files were not shown because too many files have changed in this diff.