Compare commits: v1.3.4...add-spec-f
1 commit

Author | SHA1 | Date
---|---|---
 | 68202f3c06 |
3  .gitignore  vendored
@@ -1,4 +1,3 @@
*.swp
bin/
coverage/
gopath/
pkg/
17  .travis.yml
@@ -1,17 +0,0 @@
language: go
sudo: false
matrix:
  include:
    - go: 1.4
      env: TOOLS_CMD=golang.org/x/tools/cmd
    - go: 1.3
      env: TOOLS_CMD=code.google.com/p/go.tools/cmd
    - go: 1.2
      env: TOOLS_CMD=code.google.com/p/go.tools/cmd

install:
  - go get ${TOOLS_CMD}/cover
  - go get ${TOOLS_CMD}/vet

script:
  - ./test
@@ -1,68 +0,0 @@
# How to Contribute

CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via
GitHub pull requests. This document outlines some of the conventions on
development workflow, commit message formatting, contact points and other
resources to make it easier to get your contribution accepted.

# Certificate of Origin

By contributing to this project you agree to the Developer Certificate of
Origin (DCO). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution. See the [DCO](DCO) file for details.
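In practice, agreement with the DCO is typically recorded as a `Signed-off-by:` trailer on each commit; `git commit -s` adds one automatically from your configured name and email. This is a general git convention rather than a step spelled out in the text above:

```
# Adds "Signed-off-by: Your Name <you@example.com>" from git's user.name/user.email
git commit -s -m "environment: write new keys in consistent order"
```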
# Email and Chat

The project currently uses the general CoreOS email list and IRC channel:
- Email: [coreos-dev](https://groups.google.com/forum/#!forum/coreos-dev)
- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org

## Getting Started

- Fork the repository on GitHub
- Read the [README](README.md) for build and test instructions
- Play with the project, submit bugs, submit patches!

## Contribution Flow

This is a rough outline of what a contributor's workflow looks like:

- Create a topic branch from where you want to base your work (usually master).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see below).
- Push your changes to a topic branch in your fork of the repository.
- Make sure the tests pass, and add any new tests as appropriate.
- Submit a pull request to the original repository.

Thanks for your contributions!

### Format of the Commit Message

We follow a rough convention for commit messages that is designed to answer two
questions: what changed and why. The subject line should feature the what and
the body of the commit should describe the why.

```
environment: write new keys in consistent order

Go 1.3 randomizes the ordering of keys when iterating over a map.
Sort the keys to make this ordering consistent.

Fixes #38
```

The format can be described more formally as follows:

```
<subsystem>: <what changed>
<BLANK LINE>
<why this change was made>
<BLANK LINE>
<footer>
```

The first line is the subject and should be no longer than 70 characters, the
second line is always blank, and other lines should be wrapped at 80 characters.
This allows the message to be easier to read on GitHub as well as in various
git tools.
36  DCO
@@ -1,36 +0,0 @@
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
@@ -1,37 +0,0 @@
## OEM configuration

The `coreos.oem.*` parameters follow the [os-release spec][os-release], but have been repurposed as a way for coreos-cloudinit to know about the OEM partition on this machine. Customizing this section is only needed when generating a new OEM of CoreOS from the SDK. The fields include:

- **id**: Lowercase string identifying the OEM
- **name**: Human-friendly string representing the OEM
- **version-id**: Lowercase string identifying the version of the OEM
- **home-url**: Link to the homepage of the provider or OEM
- **bug-report-url**: Link to a place to file bug reports about this OEM

coreos-cloudinit renders these fields to `/etc/oem-release`.
If no **id** field is provided, coreos-cloudinit will ignore this section.

For example, the following cloud-config document...

```yaml
#cloud-config
coreos:
  oem:
    id: rackspace
    name: Rackspace Cloud Servers
    version-id: 168.0.0
    home-url: https://www.rackspace.com/cloud/servers/
    bug-report-url: https://github.com/coreos/coreos-overlay
```

...would be rendered to the following `/etc/oem-release`:

```
ID=rackspace
NAME="Rackspace Cloud Servers"
VERSION_ID=168.0.0
HOME_URL="https://www.rackspace.com/cloud/servers/"
BUG_REPORT_URL="https://github.com/coreos/coreos-overlay"
```

[os-release]: http://www.freedesktop.org/software/systemd/man/os-release.html
@@ -1,305 +1,40 @@
# Using Cloud-Config
# Customize CoreOS with Cloud-Config

CoreOS allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units. This document describes the full list of items we can configure. The `coreos-cloudinit` program uses these files as it configures the OS after startup or during runtime.
CoreOS allows you to configure machine parameters, launch systemd units on startup, and more. Only a subset of [cloud-config functionality][cloud-config] is implemented. A set of custom parameters has been added to the cloud-config format that are specific to CoreOS.

Your cloud-config is processed during each boot. Invalid cloud-config won't be processed but will be logged in the journal. You can validate your cloud-config with the [CoreOS validator]({{site.url}}/validate) or by running `coreos-cloudinit -validate`.
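For instance, a config file can be checked locally before a machine boots with it. This invocation is an illustration only: it assumes the `-validate` and `-from-file` flags described in the project README, and `./user_data` is a placeholder path:

```sh
# Check a local cloud-config for problems without applying it to the system.
coreos-cloudinit -validate -from-file ./user_data
```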
## Configuration File

The file used by this system initialization program is called a "cloud-config" file. It is inspired by the [cloud-init][cloud-init] project's [cloud-config][cloud-config] file, which is "the defacto multi-distribution package that handles early initialization of a cloud instance" ([cloud-init docs][cloud-init-docs]). Because the cloud-init project includes tools which aren't used by CoreOS, only the relevant subset of its configuration items will be implemented in our cloud-config file. In addition to those, we added a few CoreOS-specific items, such as etcd configuration, OEM definition, and systemd units.

We've designed our implementation to allow the same cloud-config file to work across all of our supported platforms.

[cloud-init]: https://launchpad.net/cloud-init
[cloud-init-docs]: http://cloudinit.readthedocs.org/en/latest/index.html
[cloud-config]: http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data

### File Format

The cloud-config file uses the [YAML][yaml] file format, which uses whitespace and new-lines to delimit lists, associative arrays, and values.

A cloud-config file must contain `#cloud-config`, followed by an associative array which has zero or more of the following keys:

- `coreos`
- `ssh_authorized_keys`
- `hostname`
- `users`
- `write_files`
- `manage_etc_hosts`

The expected values for these keys are defined in the rest of this document; a minimal skeleton is shown below.

[yaml]: https://en.wikipedia.org/wiki/YAML
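To make that shape concrete, here is a minimal skeleton assembled from examples that appear elsewhere in this document; every key is optional and all values are placeholders:

```yaml
#cloud-config

hostname: coreos1
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
users:
  - name: elroy
write_files:
  - path: /etc/motd
    content: |
      Good news, everyone!
manage_etc_hosts: localhost
coreos:
  units:
    - name: etcd.service
      command: start
```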
### Providing Cloud-Config with Config-Drive

CoreOS tries to conform to each platform's native method to provide user data. Each cloud provider tends to be unique, but this complexity has been abstracted by CoreOS. You can view each platform's instructions on their documentation pages. The most universal way to provide cloud-config is [via config-drive](https://github.com/coreos/coreos-cloudinit/blob/master/Documentation/config-drive.md), which attaches a read-only device containing your cloud-config file to the machine.

## Configuration Parameters

### coreos

#### etcd

The `coreos.etcd.*` parameters will be translated to a partial systemd unit acting as an etcd configuration file.
If the platform environment supports the templating feature of coreos-cloudinit, it is possible to automate etcd configuration with the `$private_ipv4` and `$public_ipv4` fields. For example, the following cloud-config document...

```yaml
#cloud-config

coreos:
  etcd:
    name: node001
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $public_ipv4:4001
    peer-addr: $private_ipv4:7001
```

...will generate a systemd unit drop-in like this:

```yaml
[Service]
Environment="ETCD_NAME=node001"
Environment="ETCD_DISCOVERY=https://discovery.etcd.io/<token>"
Environment="ETCD_ADDR=203.0.113.29:4001"
Environment="ETCD_PEER_ADDR=192.0.2.13:7001"
```

For more information about the available configuration parameters, see the [etcd documentation][etcd-config].

_Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are only supported on Amazon EC2, Google Compute Engine, OpenStack, Rackspace, DigitalOcean, and Vagrant._

[etcd-config]: https://github.com/coreos/etcd/blob/master/Documentation/configuration.md

#### fleet

The `coreos.fleet.*` parameters work very similarly to `coreos.etcd.*`, and allow for the configuration of fleet through environment variables. For example, the following cloud-config document...

```yaml
#cloud-config

coreos:
  fleet:
    public-ip: $public_ipv4
    metadata: region=us-west
```

...will generate a systemd unit drop-in like this:

```yaml
[Service]
Environment="FLEET_PUBLIC_IP=203.0.113.29"
Environment="FLEET_METADATA=region=us-west"
```

For more information on fleet configuration, see the [fleet documentation][fleet-config].

[fleet-config]: https://github.com/coreos/fleet/blob/master/Documentation/deployment-and-configuration.md#configuration
#### flannel

The `coreos.flannel.*` parameters also work very similarly to `coreos.etcd.*`
and `coreos.fleet.*`. They can be used to set environment variables for
flanneld. For example, the following cloud-config...

```yaml
#cloud-config

coreos:
  flannel:
    etcd_prefix: /coreos.com/network2
```

...will generate a systemd unit drop-in like so:

```
[Service]
Environment="FLANNELD_ETCD_PREFIX=/coreos.com/network2"
```

List of flannel configuration parameters:

- **etcd_endpoints**: Comma separated list of etcd endpoints
- **etcd_cafile**: Path to CA file used for TLS communication with etcd
- **etcd_certfile**: Path to certificate file used for TLS communication with etcd
- **etcd_keyfile**: Path to private key file used for TLS communication with etcd
- **etcd_prefix**: Etcd prefix path to be used for flannel keys
- **ip_masq**: Install IP masquerade rules for traffic outside of flannel subnet
- **subnet_file**: Path to flannel subnet file to write out
- **interface**: Interface (name or IP) that should be used for inter-host communication

[flannel-readme]: https://github.com/coreos/flannel/blob/master/README.md

#### locksmith

The `coreos.locksmith.*` parameters can be used to set environment variables
for locksmith. For example, the following cloud-config...

```yaml
#cloud-config

coreos:
  locksmith:
    endpoint: example.com:4001
```

...will generate a systemd unit drop-in like so:

```
[Service]
Environment="LOCKSMITHD_ENDPOINT=example.com:4001"
```

For the complete list of locksmith configuration parameters, see the [locksmith documentation][locksmith-readme].

[locksmith-readme]: https://github.com/coreos/locksmith/blob/master/README.md

#### update

The `coreos.update.*` parameters manipulate settings related to how CoreOS instances are updated.

These fields will be written out to and replace `/etc/coreos/update.conf`. If only one of the parameters is given, it will only overwrite the given field.
The `reboot-strategy` parameter also affects the behaviour of [locksmith](https://github.com/coreos/locksmith).

- **reboot-strategy**: One of "reboot", "etcd-lock", "best-effort" or "off" for controlling when reboots are issued after an update is performed.
  - _reboot_: Reboot immediately after an update is applied.
  - _etcd-lock_: Reboot after first taking a distributed lock in etcd; this guarantees that only one host will reboot concurrently and that the cluster will remain available during the update.
  - _best-effort_: If etcd is running, "etcd-lock", otherwise simply "reboot".
  - _off_: Disable rebooting after updates are applied (not recommended).
- **server**: The location of the [CoreUpdate][coreupdate] server which will be queried for updates. Also known as the [omaha][omaha-docs] server endpoint.
- **group**: Signifies the channel which should be used for automatic updates. This value defaults to the version of the image initially downloaded (one of "master", "alpha", "beta", "stable").

[coreupdate]: https://coreos.com/products/coreupdate
[omaha-docs]: https://coreos.com/docs/coreupdate/custom-apps/coreupdate-protocol/

*Note: cloudinit will only manipulate the locksmith unit file in the systemd runtime directory (`/run/systemd/system/locksmithd.service`). If any manual modifications are made to an overriding unit configuration file (e.g. `/etc/systemd/system/locksmithd.service`), cloudinit will no longer be able to control the locksmith service unit.*

##### Example

```yaml
#cloud-config
coreos:
  update:
    reboot-strategy: etcd-lock
```
#### units

The `coreos.units.*` parameters define a list of arbitrary systemd units to start after booting. This feature is intended to help you start essential services required to mount storage and configure networking in order to join the CoreOS cluster. It is not intended to be a Chef/Puppet replacement.

Each item is an object with the following fields:

- **name**: String representing unit's name. Required.
- **runtime**: Boolean indicating whether or not to persist the unit across reboots. This is analogous to the `--runtime` argument to `systemctl enable`. The default value is false.
- **enable**: Boolean indicating whether or not to handle the [Install] section of the unit file. This is similar to running `systemctl enable <name>`. The default value is false.
- **content**: Plaintext string representing entire unit file. If no value is provided, the unit is assumed to exist already.
- **command**: Command to execute on unit: start, stop, reload, restart, try-restart, reload-or-restart, reload-or-try-restart. The default behavior is to not execute any commands.
- **mask**: Whether to mask the unit file by symlinking it to `/dev/null` (analogous to `systemctl mask <name>`). Note that unlike `systemctl mask`, **this will destructively remove any existing unit file** located at `/etc/systemd/system/<unit>`, to ensure that the mask succeeds. The default value is false.
- **drop-ins**: A list of unit drop-ins with the following fields:
  - **name**: String representing unit's name. Required.
  - **content**: Plaintext string representing entire file. Required.

**NOTE:** The command field is ignored for all network, netdev, and link units. The systemd-networkd.service unit will be restarted in their place.

##### Examples

Write a unit to disk, automatically starting it.

```yaml
#cloud-config

coreos:
  units:
    - name: docker-redis.service
      command: start
      content: |
        [Unit]
        Description=Redis container
        Author=Me
        After=docker.service

        [Service]
        Restart=always
        ExecStart=/usr/bin/docker start -a redis_server
        ExecStop=/usr/bin/docker stop -t 2 redis_server
```

Add the DOCKER_OPTS environment variable to docker.service.

```yaml
#cloud-config

coreos:
  units:
    - name: docker.service
      drop-ins:
        - name: 50-insecure-registry.conf
          content: |
            [Service]
            Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
```

Start the built-in `etcd` and `fleet` services:

```yaml
#cloud-config

coreos:
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```
## Supported cloud-config Parameters

### ssh_authorized_keys

The `ssh_authorized_keys` parameter adds public SSH keys which will be authorized for the `core` user.

The keys will be named "coreos-cloudinit" by default.
Override this by using the `--ssh-key-name` flag when calling `coreos-cloudinit`.

```yaml
#cloud-config

ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```

### hostname

The `hostname` parameter defines the system's hostname.
This is the local part of a fully-qualified domain name (i.e. `foo` in `foo.example.com`).

```yaml
#cloud-config

hostname: coreos1
```

### users

The `users` parameter adds or modifies the specified list of users. Each user is an object which consists of the following fields.
Each field is optional and of type string unless otherwise noted.
All but the `passwd` and `ssh-authorized-keys` fields will be ignored if the user already exists.

- **name**: Required. Login name of user
- **gecos**: GECOS comment of user
- **passwd**: Hash of the password to use for this user
- **homedir**: User's home directory. Defaults to /home/\<name\>
- **no-create-home**: Boolean. Skip home directory creation.
- **primary-group**: Default group for the user. Defaults to a new group created named after the user.
- **groups**: Add user to these additional groups
- **no-user-group**: Boolean. Skip default group creation.
- **ssh-authorized-keys**: List of public SSH keys to authorize for this user
- **coreos-ssh-import-github**: Authorize SSH keys from GitHub user
- **coreos-ssh-import-github-users**: Authorize SSH keys from a list of GitHub users
- **coreos-ssh-import-url**: Authorize SSH keys imported from a URL endpoint.
- **system**: Create the user as a system user. No home directory will be created.
- **no-log-init**: Boolean. Skip initialization of lastlog and faillog databases.
- **shell**: User's login shell.

The following fields are not yet implemented:

@@ -309,133 +44,153 @@ The following fields are not yet implemented:
- **selinux-user**: Corresponding SELinux user
- **ssh-import-id**: Import SSH keys by ID from Launchpad.
```yaml
#cloud-config

users:
  - name: elroy
    passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm...
    groups:
      - sudo
      - docker
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```

#### Generating a password hash

If you choose to use a password instead of an SSH key, generating a safe hash is extremely important to the security of your system. With updated tools like [oclhashcat](http://hashcat.net/oclhashcat/), simplified hashes like md5crypt are trivial to crack on modern GPU hardware. You can generate a "safer" hash (read: not safe, never publish your hashes publicly) via:

```
# On Debian/Ubuntu (via the package "whois")
mkpasswd --method=SHA-512 --rounds=4096

# OpenSSL (note: this will only make md5crypt. While better than plaintext it should not be considered fully secure)
openssl passwd -1

# Python (change password and salt values)
python -c "import crypt, getpass, pwd; print crypt.crypt('password', '\$6\$SALT\$')"

# Perl (change password and salt values)
perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'
```

Using a higher number of rounds will help create more secure passwords, but given enough time, password hashes can be reversed. On most RPM based distributions there is a tool called mkpasswd available in the `expect` package, but this does not handle "rounds" nor advanced hashing algorithms.
#### Retrieving SSH Authorized Keys

##### From a GitHub User

Using the `coreos-ssh-import-github` field, we can import public SSH keys from a GitHub user to use as authorized keys on a server.

```yaml
#cloud-config

users:
  - name: elroy
    coreos-ssh-import-github: elroy
```

##### From an HTTP Endpoint

We can also pull public SSH keys from any HTTP endpoint which matches [GitHub's API response format](https://developer.github.com/v3/users/keys/#list-public-keys-for-a-user).
For example, if you have an installation of GitHub Enterprise, you can provide a complete URL with an authentication token:

```yaml
#cloud-config

users:
  - name: elroy
    coreos-ssh-import-url: https://github-enterprise.example.com/api/v3/users/elroy/keys?access_token=<TOKEN>
```

You can also specify any URL whose response matches the JSON format for public keys:

```yaml
#cloud-config

users:
  - name: elroy
    coreos-ssh-import-url: https://example.com/public-keys
```

### write_files

The `write_files` directive defines a set of files to create on the local filesystem.
Each item in the list may have the following keys (a short example follows the list):

- **path**: Absolute location on disk where contents should be written
- **content**: Data to write at the provided `path`
- **permissions**: Integer representing file permissions, typically in octal notation (i.e. 0644)
- **permissions**: String representing file permissions in octal notation (i.e. '0644')
- **owner**: User and group that should own the file written to disk. This is equivalent to the `<user>:<group>` argument to `chown <user>:<group> <path>`.
- **encoding**: Optional. The encoding of the data in content. If not specified this defaults to the yaml document encoding (usually utf-8). Supported encoding types are:
  - **b64, base64**: Base64 encoded content
  - **gz, gzip**: gzip encoded content, for use with the !!binary tag
  - **gz+b64, gz+base64, gzip+b64, gzip+base64**: Base64 encoded gzip content
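A short illustration of these attributes, assembled from the `/etc/motd` and base64 entries used in the examples later in this document:

```yaml
#cloud-config

write_files:
  - path: /etc/motd
    permissions: 0644
    owner: root
    content: |
      Good news, everyone!
  - path: /tmp/todolist
    permissions: 0644
    owner: root
    encoding: base64
    content: |
      UGFjayBteSBib3ggd2l0aCBmaXZlIGRvemVuIGxpcXVvciBqdWdz
```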
## Custom cloud-config Parameters

### coreos.oem

These fields are borrowed from the [os-release spec][os-release] and repurposed
as a way for cloud-init to know about the OEM partition on this machine.

- **id**: A lower case string identifying the oem.
- **version-id**: A lower case string identifying the version of the OEM. Example: `168.0.0`
- **name**: A name without the version that is suitable for presentation to the user.
- **home-url**: Link to the homepage of the provider or OEM.
- **bug-report-url**: Link to a place to file bug reports about this OEM partition.

cloudinit must render these fields down to an `/etc/oem-release` file on disk in the following format:

```
NAME=Rackspace
ID=rackspace
VERSION_ID=168.0.0
PRETTY_NAME="Rackspace Cloud Servers"
HOME_URL="http://www.rackspace.com/cloud/servers/"
BUG_REPORT_URL="https://github.com/coreos/coreos-overlay"
```

[os-release]: http://www.freedesktop.org/software/systemd/man/os-release.html

### coreos.etcd.discovery_url

The value of `coreos.etcd.discovery_url` will be used to discover the instance's etcd peers using the [etcd discovery protocol][disco-proto]. Usage of the [public discovery service][disco-service] is encouraged.

[disco-proto]: https://github.com/coreos/etcd/blob/master/Documentation/discovery-protocol.md
[disco-service]: http://discovery.etcd.io

### coreos.units

Arbitrary systemd units may be provided in the `coreos.units` attribute.
`coreos.units` is a list of objects with the following fields:

- **name**: string representing unit's name
- **runtime**: boolean indicating whether or not to persist the unit across reboots. This is analogous to the `--runtime` flag to `systemctl enable`.
- **content**: plaintext string representing entire unit file

See docker example below.
## user-data Script

Simply set your user-data to a script where the first line is a shebang:

```
#!/bin/bash

echo 'Hello, world!'
```

## Examples

### Inject an SSH key, bootstrap etcd, and start fleet

```
#cloud-config

coreos:
  etcd:
    discovery_url: https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877
  fleet:
    autostart: yes
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```

### Start a docker container on boot

```
#cloud-config

coreos:
  units:
    - name: docker-redis.service
      content: |
        [Unit]
        Description=Redis container
        Author=Me
        After=docker.service

        [Service]
        Restart=always
        ExecStart=/usr/bin/docker start -a redis_server
        ExecStop=/usr/bin/docker stop -t 2 redis_server

        [Install]
        WantedBy=local.target
```

### Add a user

```
#cloud-config

users:
  - name: elroy
    passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm...
    groups:
      - staff
      - docker
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```

### Inject configuration files

```
#cloud-config

write_files:
  - path: /etc/hosts
    content: |
      127.0.0.1 localhost
      192.0.2.211 buildbox
  - path: /etc/resolv.conf
    permissions: 0644
    owner: root
    content: |
      nameserver 8.8.8.8
  - path: /etc/motd
    permissions: 0644
    owner: root
    content: |
      Good news, everyone!
  - path: /tmp/like_this
    permissions: 0644
    owner: root
    encoding: gzip
    content: !!binary |
      H4sIAKgdh1QAAwtITM5WyK1USMqvUCjPLMlQSMssS1VIya9KzVPIySwszS9SyCpNLwYARQFQ5CcAAAA=
  - path: /tmp/or_like_this
    permissions: 0644
    owner: root
    encoding: gzip+base64
    content: |
      H4sIAKgdh1QAAwtITM5WyK1USMqvUCjPLMlQSMssS1VIya9KzVPIySwszS9SyCpNLwYARQFQ5CcAAAA=
  - path: /tmp/todolist
    permissions: 0644
    owner: root
    encoding: base64
    content: |
      UGFjayBteSBib3ggd2l0aCBmaXZlIGRvemVuIGxpcXVvciBqdWdz
```

### manage_etc_hosts

The `manage_etc_hosts` parameter configures the contents of the `/etc/hosts` file, which is used for local name resolution.
Currently, the only supported value is "localhost" which will cause your system's hostname
to resolve to "127.0.0.1". This is helpful when the host does not have DNS
infrastructure in place to resolve its own hostname, for example, when using Vagrant.

```yaml
#cloud-config

manage_etc_hosts: localhost
```
@@ -1,34 +0,0 @@
# Distribution via Config Drive

CoreOS supports providing configuration data via [config drive][config-drive]
disk images. Currently only providing a single script or cloud config file is
supported.

[config-drive]: http://docs.openstack.org/user-guide/content/enable_config_drive.html#config_drive_contents

## Contents and Format

The image should be a single FAT or ISO9660 file system with the label
`config-2` and the configuration data should be located at
`openstack/latest/user_data`.

For example, to wrap up a config named `user_data` in a config drive image:

```sh
mkdir -p /tmp/new-drive/openstack/latest
cp user_data /tmp/new-drive/openstack/latest/user_data
mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive
rm -r /tmp/new-drive
```

## QEMU virtfs

One exception to the above: when using QEMU it is possible to skip creating an
image and use a plain directory containing the same contents:

```sh
qemu-system-x86_64 \
    -fsdev local,id=conf,security_model=none,readonly,path=/tmp/new-drive \
    -device virtio-9p-pci,fsdev=conf,mount_tag=config-2 \
    [usual qemu options here...]
```
@@ -1,27 +0,0 @@
# Debian Interfaces

**WARNING**: This option is EXPERIMENTAL and may change or be removed at any
point.

There is basic support for converting from a Debian network configuration to
networkd unit files. The -convert-netconf=debian option is used to activate
this feature.

## convert-netconf

Default: ""

Read the network config provided in cloud-drive and translate it from the
specified format into networkd unit files (requires the -from-configdrive
flag). Currently only supports "debian" which provides support for a small
subset of the [Debian network configuration](https://wiki.debian.org/NetworkConfiguration). These options include (an illustrative snippet follows the list):

- interface config methods
  - static
    - address/netmask
    - gateway
    - hwaddress
    - dns-nameservers
  - dhcp
    - hwaddress
  - manual
  - loopback
- vlan_raw_device
- bond-slaves
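For orientation (an illustration added here, not taken from the original file), a Debian-style `/etc/network/interfaces` stanza that stays within the supported subset above might look like this; coreos-cloudinit would translate it into the corresponding networkd units:

```
# Illustrative static configuration for eth0; addresses are documentation examples.
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 192.0.2.53
```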
34  Godeps/Godeps.json  generated
@@ -1,34 +0,0 @@
{
	"ImportPath": "github.com/coreos/coreos-cloudinit",
	"GoVersion": "go1.3.3",
	"Packages": [
		"./..."
	],
	"Deps": [
		{
			"ImportPath": "github.com/cloudsigma/cepgo",
			"Rev": "1bfc4895bf5c4d3b599f3f6ee142299488c8739b"
		},
		{
			"ImportPath": "github.com/coreos/go-systemd/dbus",
			"Rev": "4fbc5060a317b142e6c7bfbedb65596d5f0ab99b"
		},
		{
			"ImportPath": "github.com/coreos/yaml",
			"Rev": "6b16a5714269b2f70720a45406b1babd947a17ef"
		},
		{
			"ImportPath": "github.com/dotcloud/docker/pkg/netlink",
			"Comment": "v0.11.1-359-g55d41c3e21e1",
			"Rev": "55d41c3e21e1593b944c06196ffb2ac57ab7f653"
		},
		{
			"ImportPath": "github.com/guelfey/go.dbus",
			"Rev": "f6a3a2366cc39b8479cadc499d3c735fb10fbdda"
		},
		{
			"ImportPath": "github.com/tarm/goserial",
			"Rev": "cdabc8d44e8e84f58f18074ae44337e1f2f375b9"
		}
	]
}
5  Godeps/Readme  generated
@@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.

Please do not edit.

See https://github.com/tools/godep for more information.
2  Godeps/_workspace/.gitignore  generated  vendored
@@ -1,2 +0,0 @@
/pkg
/bin
23  Godeps/_workspace/src/github.com/cloudsigma/cepgo/.gitignore  generated  vendored
@@ -1,23 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
_obj
_test

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
*.test
202  Godeps/_workspace/src/github.com/cloudsigma/cepgo/LICENSE  generated  vendored
@@ -1,202 +0,0 @@
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "{}"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright {yyyy} {name of copyright owner}
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
|
43  Godeps/_workspace/src/github.com/cloudsigma/cepgo/README.md  generated  vendored
@@ -1,43 +0,0 @@
|
||||
cepgo
|
||||
=====
|
||||
|
||||
Cepko implements easy-to-use communication with CloudSigma's VMs through a
|
||||
virtual serial port without bothering with formatting the messages properly nor
|
||||
parsing the output with the specific and sometimes confusing shell tools for
|
||||
that purpose.
|
||||
|
||||
Having the server definition accessible by the VM can be useful in various
|
||||
ways. For example it is possible to easily determine from within the VM, which
|
||||
network interfaces are connected to public and which to private network.
|
||||
Another use is to pass some data to initial VM setup scripts, like setting the
|
||||
hostname to the VM name or passing ssh public keys through server meta.
|
||||
|
||||
Example usage:
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/cloudsigma/cepgo"
|
||||
)
|
||||
|
||||
func main() {
|
||||
c := cepgo.NewCepgo()
|
||||
result, err := c.Meta()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
fmt.Printf("%#v", result)
|
||||
}
|
||||
|
||||
Output:
|
||||
|
||||
map[string]interface {}{
|
||||
"optimize_for":"custom",
|
||||
"ssh_public_key":"ssh-rsa AAA...",
|
||||
"description":"[...]",
|
||||
}
|
||||
|
||||
For more information take a look at the Server Context section of CloudSigma
|
||||
API Docs: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
|
186  Godeps/_workspace/src/github.com/cloudsigma/cepgo/cepgo.go  generated  vendored
@@ -1,186 +0,0 @@
|
||||
// Cepko implements easy-to-use communication with CloudSigma's VMs through a
|
||||
// virtual serial port without bothering with formatting the messages properly
|
||||
// nor parsing the output with the specific and sometimes confusing shell tools
|
||||
// for that purpose.
|
||||
//
|
||||
// Having the server definition accessible by the VM can be useful in various
|
||||
// ways. For example it is possible to easily determine from within the VM,
|
||||
// which network interfaces are connected to public and which to private
|
||||
// network. Another use is to pass some data to initial VM setup scripts, like
|
||||
// setting the hostname to the VM name or passing ssh public keys through
|
||||
// server meta.
|
||||
//
|
||||
// Example usage:
|
||||
//
|
||||
// package main
|
||||
//
|
||||
// import (
|
||||
// "fmt"
|
||||
//
|
||||
// "github.com/cloudsigma/cepgo"
|
||||
// )
|
||||
//
|
||||
// func main() {
|
||||
// c := cepgo.NewCepgo()
|
||||
// result, err := c.Meta()
|
||||
// if err != nil {
|
||||
// panic(err)
|
||||
// }
|
||||
// fmt.Printf("%#v", result)
|
||||
// }
|
||||
//
|
||||
// Output:
|
||||
//
|
||||
// map[string]string{
|
||||
// "optimize_for":"custom",
|
||||
// "ssh_public_key":"ssh-rsa AAA...",
|
||||
// "description":"[...]",
|
||||
// }
|
||||
//
|
||||
// For more information take a look at the Server Context section API Docs:
|
||||
// http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
|
||||
package cepgo
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"runtime"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/tarm/goserial"
|
||||
)
|
||||
|
||||
const (
|
||||
requestPattern = "<\n%s\n>"
|
||||
EOT = '\x04' // End Of Transmission
|
||||
)
|
||||
|
||||
var (
|
||||
SerialPort string = "/dev/ttyS1"
|
||||
Baud int = 115200
|
||||
)
|
||||
|
||||
// Sets the serial port. If the operating system is windows CloudSigma's server
|
||||
// context is at COM2 port, otherwise (linux, freebsd, darwin) the port is
|
||||
// being left to the default /dev/ttyS1.
|
||||
func init() {
|
||||
if runtime.GOOS == "windows" {
|
||||
SerialPort = "COM2"
|
||||
}
|
||||
}
|
||||
|
||||
// The default fetcher makes the connection to the serial port,
|
||||
// writes given query and reads until the EOT symbol.
|
||||
func fetchViaSerialPort(key string) ([]byte, error) {
|
||||
config := &serial.Config{Name: SerialPort, Baud: Baud}
|
||||
connection, err := serial.OpenPort(config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
query := fmt.Sprintf(requestPattern, key)
|
||||
if _, err := connection.Write([]byte(query)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
reader := bufio.NewReader(connection)
|
||||
answer, err := reader.ReadBytes(EOT)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return answer[0 : len(answer)-1], nil
|
||||
}
|
||||
|
||||
// Queries to the serial port can be executed only from instance of this type.
|
||||
// The result from each of them can be either interface{}, map[string]string or
|
||||
// a single in case of single value is returned. There is also a public metod
|
||||
// who directly calls the fetcher and returns raw []byte from the serial port.
|
||||
type Cepgo struct {
|
||||
fetcher func(string) ([]byte, error)
|
||||
}
|
||||
|
||||
// Creates a Cepgo instance with the default serial port fetcher.
|
||||
func NewCepgo() *Cepgo {
|
||||
cepgo := new(Cepgo)
|
||||
cepgo.fetcher = fetchViaSerialPort
|
||||
return cepgo
|
||||
}
|
||||
|
||||
// Creates a Cepgo instance with custom fetcher.
|
||||
func NewCepgoFetcher(fetcher func(string) ([]byte, error)) *Cepgo {
|
||||
cepgo := new(Cepgo)
|
||||
cepgo.fetcher = fetcher
|
||||
return cepgo
|
||||
}
|
||||
|
||||
// Fetches raw []byte from the serial port using directly the fetcher member.
|
||||
func (c *Cepgo) FetchRaw(key string) ([]byte, error) {
|
||||
return c.fetcher(key)
|
||||
}
|
||||
|
||||
// Fetches a single key and tries to unmarshal the result to json and returns
|
||||
// it. If the unmarshalling fails it's safe to assume the result it's just a
|
||||
// string and returns it.
|
||||
func (c *Cepgo) Key(key string) (interface{}, error) {
|
||||
var result interface{}
|
||||
|
||||
fetched, err := c.FetchRaw(key)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = json.Unmarshal(fetched, &result)
|
||||
if err != nil {
|
||||
return string(fetched), nil
|
||||
}
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// Fetches all the server context. Equivalent of c.Key("")
|
||||
func (c *Cepgo) All() (interface{}, error) {
|
||||
return c.Key("")
|
||||
}
|
||||
|
||||
// Fetches only the object meta field and makes sure to return a proper
|
||||
// map[string]string
|
||||
func (c *Cepgo) Meta() (map[string]string, error) {
|
||||
rawMeta, err := c.Key("/meta/")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return typeAssertToMapOfStrings(rawMeta)
|
||||
}
|
||||
|
||||
// Fetches only the global context and makes sure to return a proper
|
||||
// map[string]string
|
||||
func (c *Cepgo) GlobalContext() (map[string]string, error) {
|
||||
rawContext, err := c.Key("/global_context/")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return typeAssertToMapOfStrings(rawContext)
|
||||
}
|
||||
|
||||
// Just a little helper function that uses type assertions in order to convert
|
||||
// a interface{} to map[string]string if this is possible.
|
||||
func typeAssertToMapOfStrings(raw interface{}) (map[string]string, error) {
|
||||
result := make(map[string]string)
|
||||
|
||||
dictionary, ok := raw.(map[string]interface{})
|
||||
if !ok {
|
||||
return nil, errors.New("Received bytes are formatted badly")
|
||||
}
|
||||
|
||||
for key, rawValue := range dictionary {
|
||||
if value, ok := rawValue.(string); ok {
|
||||
result[key] = value
|
||||
} else {
|
||||
return nil, errors.New("Server context metadata is formatted badly")
|
||||
}
|
||||
}
|
||||
return result, nil
|
||||
}
|
122  Godeps/_workspace/src/github.com/cloudsigma/cepgo/cepgo_test.go  generated  vendored
@@ -1,122 +0,0 @@
|
||||
package cepgo
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func fetchMock(key string) ([]byte, error) {
|
||||
context := []byte(`{
|
||||
"context": true,
|
||||
"cpu": 4000,
|
||||
"cpu_model": null,
|
||||
"cpus_instead_of_cores": false,
|
||||
"enable_numa": false,
|
||||
"global_context": {
|
||||
"some_global_key": "some_global_val"
|
||||
},
|
||||
"grantees": [],
|
||||
"hv_relaxed": false,
|
||||
"hv_tsc": false,
|
||||
"jobs": [],
|
||||
"mem": 4294967296,
|
||||
"meta": {
|
||||
"base64_fields": "cloudinit-user-data",
|
||||
"cloudinit-user-data": "I2Nsb3VkLWNvbmZpZwoKaG9zdG5hbWU6IGNvcmVvczE=",
|
||||
"ssh_public_key": "ssh-rsa AAAAB2NzaC1yc2E.../hQ5D5 john@doe"
|
||||
},
|
||||
"name": "coreos",
|
||||
"nics": [
|
||||
{
|
||||
"runtime": {
|
||||
"interface_type": "public",
|
||||
"ip_v4": {
|
||||
"uuid": "31.171.251.74"
|
||||
},
|
||||
"ip_v6": null
|
||||
},
|
||||
"vlan": null
|
||||
}
|
||||
],
|
||||
"smp": 2,
|
||||
"status": "running",
|
||||
"uuid": "20a0059b-041e-4d0c-bcc6-9b2852de48b3"
|
||||
}`)
|
||||
|
||||
if key == "" {
|
||||
return context, nil
|
||||
}
|
||||
|
||||
var marshalledContext map[string]interface{}
|
||||
|
||||
err := json.Unmarshal(context, &marshalledContext)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if key[0] == '/' {
|
||||
key = key[1:]
|
||||
}
|
||||
if key[len(key)-1] == '/' {
|
||||
key = key[:len(key)-1]
|
||||
}
|
||||
|
||||
return json.Marshal(marshalledContext[key])
|
||||
}
|
||||
|
||||
func TestAll(t *testing.T) {
|
||||
cepgo := NewCepgoFetcher(fetchMock)
|
||||
|
||||
result, err := cepgo.All()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
for _, key := range []string{"meta", "name", "uuid", "global_context"} {
|
||||
if _, ok := result.(map[string]interface{})[key]; !ok {
|
||||
t.Errorf("%s not in all keys", key)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestKey(t *testing.T) {
|
||||
cepgo := NewCepgoFetcher(fetchMock)
|
||||
|
||||
result, err := cepgo.Key("uuid")
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if _, ok := result.(string); !ok {
|
||||
t.Errorf("%#v\n", result)
|
||||
|
||||
t.Error("Fetching the uuid did not return a string")
|
||||
}
|
||||
}
|
||||
|
||||
func TestMeta(t *testing.T) {
|
||||
cepgo := NewCepgoFetcher(fetchMock)
|
||||
|
||||
meta, err := cepgo.Meta()
|
||||
if err != nil {
|
||||
t.Errorf("%#v\n", meta)
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if _, ok := meta["ssh_public_key"]; !ok {
|
||||
t.Error("ssh_public_key is not in the meta")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGlobalContext(t *testing.T) {
|
||||
cepgo := NewCepgoFetcher(fetchMock)
|
||||
|
||||
result, err := cepgo.GlobalContext()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if _, ok := result["some_global_key"]; !ok {
|
||||
t.Error("some_global_key is not in the global context")
|
||||
}
|
||||
}
|
131
Godeps/_workspace/src/github.com/coreos/yaml/README.md
generated
vendored
@@ -1,131 +0,0 @@
|
||||
Note: This is a fork of https://github.com/go-yaml/yaml. The following README
|
||||
doesn't necessarily apply to this fork.
|
||||
|
||||
# YAML support for the Go language
|
||||
|
||||
Introduction
|
||||
------------
|
||||
|
||||
The yaml package enables Go programs to comfortably encode and decode YAML
|
||||
values. It was developed within [Canonical](https://www.canonical.com) as
|
||||
part of the [juju](https://juju.ubuntu.com) project, and is based on a
|
||||
pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
|
||||
C library to parse and generate YAML data quickly and reliably.
|
||||
|
||||
Compatibility
|
||||
-------------
|
||||
|
||||
The yaml package supports most of YAML 1.1 and 1.2, including support for
|
||||
anchors, tags, map merging, etc. Multi-document unmarshalling is not yet
|
||||
implemented, and base-60 floats from YAML 1.1 are purposefully not
|
||||
supported since they're a poor design and are gone in YAML 1.2.
|
||||
|
||||
Installation and usage
|
||||
----------------------
|
||||
|
||||
The import path for the package is *gopkg.in/yaml.v1*.
|
||||
|
||||
To install it, run:
|
||||
|
||||
go get gopkg.in/yaml.v1
|
||||
|
||||
API documentation
|
||||
-----------------
|
||||
|
||||
If opened in a browser, the import path itself leads to the API documentation:
|
||||
|
||||
* [https://gopkg.in/yaml.v1](https://gopkg.in/yaml.v1)
|
||||
|
||||
API stability
|
||||
-------------
|
||||
|
||||
The package API for yaml v1 will remain stable as described in [gopkg.in](https://gopkg.in).
|
||||
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
The yaml package is licensed under the LGPL with an exception that allows it to be linked statically. Please see the LICENSE file for details.
|
||||
|
||||
|
||||
Example
|
||||
-------
|
||||
|
||||
```Go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"gopkg.in/yaml.v1"
|
||||
)
|
||||
|
||||
var data = `
|
||||
a: Easy!
|
||||
b:
|
||||
c: 2
|
||||
d: [3, 4]
|
||||
`
|
||||
|
||||
type T struct {
|
||||
A string
|
||||
B struct{C int; D []int ",flow"}
|
||||
}
|
||||
|
||||
func main() {
|
||||
t := T{}
|
||||
|
||||
err := yaml.Unmarshal([]byte(data), &t)
|
||||
if err != nil {
|
||||
log.Fatalf("error: %v", err)
|
||||
}
|
||||
fmt.Printf("--- t:\n%v\n\n", t)
|
||||
|
||||
d, err := yaml.Marshal(&t)
|
||||
if err != nil {
|
||||
log.Fatalf("error: %v", err)
|
||||
}
|
||||
fmt.Printf("--- t dump:\n%s\n\n", string(d))
|
||||
|
||||
m := make(map[interface{}]interface{})
|
||||
|
||||
err = yaml.Unmarshal([]byte(data), &m)
|
||||
if err != nil {
|
||||
log.Fatalf("error: %v", err)
|
||||
}
|
||||
fmt.Printf("--- m:\n%v\n\n", m)
|
||||
|
||||
d, err = yaml.Marshal(&m)
|
||||
if err != nil {
|
||||
log.Fatalf("error: %v", err)
|
||||
}
|
||||
fmt.Printf("--- m dump:\n%s\n\n", string(d))
|
||||
}
|
||||
```
|
||||
|
||||
This example will generate the following output:
|
||||
|
||||
```
|
||||
--- t:
|
||||
{Easy! {2 [3 4]}}
|
||||
|
||||
--- t dump:
|
||||
a: Easy!
|
||||
b:
|
||||
c: 2
|
||||
d: [3, 4]
|
||||
|
||||
|
||||
--- m:
|
||||
map[a:Easy! b:map[c:2 d:[3 4]]]
|
||||
|
||||
--- m dump:
|
||||
a: Easy!
|
||||
b:
|
||||
c: 2
|
||||
d:
|
||||
- 3
|
||||
- 4
|
||||
```
|
||||
|
190
Godeps/_workspace/src/github.com/coreos/yaml/resolve.go
generated
vendored
@@ -1,190 +0,0 @@
|
||||
package yaml
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"math"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// TODO: merge, timestamps, base 60 floats, omap.
|
||||
|
||||
type resolveMapItem struct {
|
||||
value interface{}
|
||||
tag string
|
||||
}
|
||||
|
||||
var resolveTable = make([]byte, 256)
|
||||
var resolveMap = make(map[string]resolveMapItem)
|
||||
|
||||
func init() {
|
||||
t := resolveTable
|
||||
t[int('+')] = 'S' // Sign
|
||||
t[int('-')] = 'S'
|
||||
for _, c := range "0123456789" {
|
||||
t[int(c)] = 'D' // Digit
|
||||
}
|
||||
for _, c := range "yYnNtTfFoO~" {
|
||||
t[int(c)] = 'M' // In map
|
||||
}
|
||||
t[int('.')] = '.' // Float (potentially in map)
|
||||
|
||||
var resolveMapList = []struct {
|
||||
v interface{}
|
||||
tag string
|
||||
l []string
|
||||
}{
|
||||
{true, yaml_BOOL_TAG, []string{"y", "Y", "yes", "Yes", "YES"}},
|
||||
{true, yaml_BOOL_TAG, []string{"true", "True", "TRUE"}},
|
||||
{true, yaml_BOOL_TAG, []string{"on", "On", "ON"}},
|
||||
{false, yaml_BOOL_TAG, []string{"n", "N", "no", "No", "NO"}},
|
||||
{false, yaml_BOOL_TAG, []string{"false", "False", "FALSE"}},
|
||||
{false, yaml_BOOL_TAG, []string{"off", "Off", "OFF"}},
|
||||
{nil, yaml_NULL_TAG, []string{"", "~", "null", "Null", "NULL"}},
|
||||
{math.NaN(), yaml_FLOAT_TAG, []string{".nan", ".NaN", ".NAN"}},
|
||||
{math.Inf(+1), yaml_FLOAT_TAG, []string{".inf", ".Inf", ".INF"}},
|
||||
{math.Inf(+1), yaml_FLOAT_TAG, []string{"+.inf", "+.Inf", "+.INF"}},
|
||||
{math.Inf(-1), yaml_FLOAT_TAG, []string{"-.inf", "-.Inf", "-.INF"}},
|
||||
{"<<", yaml_MERGE_TAG, []string{"<<"}},
|
||||
}
|
||||
|
||||
m := resolveMap
|
||||
for _, item := range resolveMapList {
|
||||
for _, s := range item.l {
|
||||
m[s] = resolveMapItem{item.v, item.tag}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const longTagPrefix = "tag:yaml.org,2002:"
|
||||
|
||||
func shortTag(tag string) string {
|
||||
// TODO This can easily be made faster and produce less garbage.
|
||||
if strings.HasPrefix(tag, longTagPrefix) {
|
||||
return "!!" + tag[len(longTagPrefix):]
|
||||
}
|
||||
return tag
|
||||
}
|
||||
|
||||
func longTag(tag string) string {
|
||||
if strings.HasPrefix(tag, "!!") {
|
||||
return longTagPrefix + tag[2:]
|
||||
}
|
||||
return tag
|
||||
}
|
||||
|
||||
func resolvableTag(tag string) bool {
|
||||
switch tag {
|
||||
case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG:
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func resolve(tag string, in string) (rtag string, out interface{}) {
|
||||
if !resolvableTag(tag) {
|
||||
return tag, in
|
||||
}
|
||||
|
||||
defer func() {
|
||||
switch tag {
|
||||
case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG:
|
||||
return
|
||||
}
|
||||
fail(fmt.Sprintf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag)))
|
||||
}()
|
||||
|
||||
// Any data is accepted as a !!str or !!binary.
|
||||
// Otherwise, the prefix is enough of a hint about what it might be.
|
||||
hint := byte('N')
|
||||
if in != "" {
|
||||
hint = resolveTable[in[0]]
|
||||
}
|
||||
if hint != 0 && tag != yaml_STR_TAG && tag != yaml_BINARY_TAG {
|
||||
// Handle things we can lookup in a map.
|
||||
if item, ok := resolveMap[in]; ok {
|
||||
return item.tag, item.value
|
||||
}
|
||||
|
||||
// Base 60 floats are a bad idea, were dropped in YAML 1.2, and
// are purposefully unsupported here. They're still quoted on
// the way out for compatibility with other parsers, though.
|
||||
|
||||
switch hint {
|
||||
case 'M':
|
||||
// We've already checked the map above.
|
||||
|
||||
case '.':
|
||||
// Not in the map, so maybe a normal float.
|
||||
floatv, err := strconv.ParseFloat(in, 64)
|
||||
if err == nil {
|
||||
return yaml_FLOAT_TAG, floatv
|
||||
}
|
||||
|
||||
case 'D', 'S':
|
||||
// Int, float, or timestamp.
|
||||
plain := strings.Replace(in, "_", "", -1)
|
||||
intv, err := strconv.ParseInt(plain, 0, 64)
|
||||
if err == nil {
|
||||
if intv == int64(int(intv)) {
|
||||
return yaml_INT_TAG, int(intv)
|
||||
} else {
|
||||
return yaml_INT_TAG, intv
|
||||
}
|
||||
}
|
||||
floatv, err := strconv.ParseFloat(plain, 64)
|
||||
if err == nil {
|
||||
return yaml_FLOAT_TAG, floatv
|
||||
}
|
||||
if strings.HasPrefix(plain, "0b") {
|
||||
intv, err := strconv.ParseInt(plain[2:], 2, 64)
|
||||
if err == nil {
|
||||
return yaml_INT_TAG, int(intv)
|
||||
}
|
||||
} else if strings.HasPrefix(plain, "-0b") {
|
||||
intv, err := strconv.ParseInt(plain[3:], 2, 64)
|
||||
if err == nil {
|
||||
return yaml_INT_TAG, -int(intv)
|
||||
}
|
||||
}
|
||||
// XXX Handle timestamps here.
|
||||
|
||||
default:
|
||||
panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")")
|
||||
}
|
||||
}
|
||||
if tag == yaml_BINARY_TAG {
|
||||
return yaml_BINARY_TAG, in
|
||||
}
|
||||
if utf8.ValidString(in) {
|
||||
return yaml_STR_TAG, in
|
||||
}
|
||||
return yaml_BINARY_TAG, encodeBase64(in)
|
||||
}
|
||||
|
||||
// encodeBase64 encodes s as base64 that is broken up into multiple lines
|
||||
// as appropriate for the resulting length.
|
||||
func encodeBase64(s string) string {
|
||||
const lineLen = 70
|
||||
encLen := base64.StdEncoding.EncodedLen(len(s))
|
||||
lines := encLen/lineLen + 1
|
||||
buf := make([]byte, encLen*2+lines)
|
||||
in := buf[0:encLen]
|
||||
out := buf[encLen:]
|
||||
base64.StdEncoding.Encode(in, []byte(s))
|
||||
k := 0
|
||||
for i := 0; i < len(in); i += lineLen {
|
||||
j := i + lineLen
|
||||
if j > len(in) {
|
||||
j = len(in)
|
||||
}
|
||||
k += copy(out[k:], in[i:j])
|
||||
if lines > 1 {
|
||||
out[k] = '\n'
|
||||
k++
|
||||
}
|
||||
}
|
||||
return string(out[:k])
|
||||
}
|
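The resolution table above is what decides whether a plain scalar comes back as a bool, int, float, nil or string. Below is a minimal sketch of that behaviour through the public API; it is not part of the vendored file and assumes the upstream gopkg.in/yaml.v1 import path named in the README, while this fork lives under github.com/coreos/yaml.

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v1" // this fork is vendored as github.com/coreos/yaml
)

func main() {
	// Plain (untagged) scalars go through the resolve table built in init():
	// "yes"/"on" become bools, digit-led strings become ints or floats,
	// "~" becomes nil, and anything unrecognised falls back to !!str.
	var doc map[string]interface{}
	src := []byte("a: yes\nb: 0b1010\nc: .5\nd: ~\ne: hello")
	if err := yaml.Unmarshal(src, &doc); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%T %T %T %T %T\n", doc["a"], doc["b"], doc["c"], doc["d"], doc["e"])
	// Expected along the lines of: bool int float64 <nil> string
}
```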
2
Godeps/_workspace/src/github.com/dotcloud/docker/pkg/netlink/MAINTAINERS
generated
vendored
@@ -1,2 +0,0 @@
Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
Guillaume J. Charmes <guillaume@docker.com> (@creack)
23
Godeps/_workspace/src/github.com/dotcloud/docker/pkg/netlink/netlink.go
generated
vendored
@@ -1,23 +0,0 @@
|
||||
// Package netlink provides access to low-level Netlink sockets and messages.
|
||||
//
|
||||
// Actual implementations are in:
|
||||
// netlink_linux.go
|
||||
// netlink_darwin.go
|
||||
package netlink
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"net"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrWrongSockType = errors.New("Wrong socket type")
|
||||
ErrShortResponse = errors.New("Got short response from netlink")
|
||||
)
|
||||
|
||||
// A Route is a subnet associated with the interface to reach it.
|
||||
type Route struct {
|
||||
*net.IPNet
|
||||
Iface *net.Interface
|
||||
Default bool
|
||||
}
|
891
Godeps/_workspace/src/github.com/dotcloud/docker/pkg/netlink/netlink_linux.go
generated
vendored
@@ -1,891 +0,0 @@
|
||||
// +build amd64
|
||||
|
||||
package netlink
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"net"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
const (
|
||||
IFNAMSIZ = 16
|
||||
DEFAULT_CHANGE = 0xFFFFFFFF
|
||||
IFLA_INFO_KIND = 1
|
||||
IFLA_INFO_DATA = 2
|
||||
VETH_INFO_PEER = 1
|
||||
IFLA_NET_NS_FD = 28
|
||||
SIOC_BRADDBR = 0x89a0
|
||||
SIOC_BRADDIF = 0x89a2
|
||||
)
|
||||
|
||||
var nextSeqNr int
|
||||
|
||||
type ifreqHwaddr struct {
|
||||
IfrnName [16]byte
|
||||
IfruHwaddr syscall.RawSockaddr
|
||||
}
|
||||
|
||||
type ifreqIndex struct {
|
||||
IfrnName [16]byte
|
||||
IfruIndex int32
|
||||
}
|
||||
|
||||
func nativeEndian() binary.ByteOrder {
|
||||
var x uint32 = 0x01020304
|
||||
if *(*byte)(unsafe.Pointer(&x)) == 0x01 {
|
||||
return binary.BigEndian
|
||||
}
|
||||
return binary.LittleEndian
|
||||
}
|
||||
|
||||
func getSeq() int {
|
||||
nextSeqNr = nextSeqNr + 1
|
||||
return nextSeqNr
|
||||
}
|
||||
|
||||
func getIpFamily(ip net.IP) int {
|
||||
if len(ip) <= net.IPv4len {
|
||||
return syscall.AF_INET
|
||||
}
|
||||
if ip.To4() != nil {
|
||||
return syscall.AF_INET
|
||||
}
|
||||
return syscall.AF_INET6
|
||||
}
|
||||
|
||||
type NetlinkRequestData interface {
|
||||
Len() int
|
||||
ToWireFormat() []byte
|
||||
}
|
||||
|
||||
type IfInfomsg struct {
|
||||
syscall.IfInfomsg
|
||||
}
|
||||
|
||||
func newIfInfomsg(family int) *IfInfomsg {
|
||||
return &IfInfomsg{
|
||||
IfInfomsg: syscall.IfInfomsg{
|
||||
Family: uint8(family),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func newIfInfomsgChild(parent *RtAttr, family int) *IfInfomsg {
|
||||
msg := newIfInfomsg(family)
|
||||
parent.children = append(parent.children, msg)
|
||||
return msg
|
||||
}
|
||||
|
||||
func (msg *IfInfomsg) ToWireFormat() []byte {
|
||||
native := nativeEndian()
|
||||
|
||||
length := syscall.SizeofIfInfomsg
|
||||
b := make([]byte, length)
|
||||
b[0] = msg.Family
|
||||
b[1] = 0
|
||||
native.PutUint16(b[2:4], msg.Type)
|
||||
native.PutUint32(b[4:8], uint32(msg.Index))
|
||||
native.PutUint32(b[8:12], msg.Flags)
|
||||
native.PutUint32(b[12:16], msg.Change)
|
||||
return b
|
||||
}
|
||||
|
||||
func (msg *IfInfomsg) Len() int {
|
||||
return syscall.SizeofIfInfomsg
|
||||
}
|
||||
|
||||
type IfAddrmsg struct {
|
||||
syscall.IfAddrmsg
|
||||
}
|
||||
|
||||
func newIfAddrmsg(family int) *IfAddrmsg {
|
||||
return &IfAddrmsg{
|
||||
IfAddrmsg: syscall.IfAddrmsg{
|
||||
Family: uint8(family),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (msg *IfAddrmsg) ToWireFormat() []byte {
|
||||
native := nativeEndian()
|
||||
|
||||
length := syscall.SizeofIfAddrmsg
|
||||
b := make([]byte, length)
|
||||
b[0] = msg.Family
|
||||
b[1] = msg.Prefixlen
|
||||
b[2] = msg.Flags
|
||||
b[3] = msg.Scope
|
||||
native.PutUint32(b[4:8], msg.Index)
|
||||
return b
|
||||
}
|
||||
|
||||
func (msg *IfAddrmsg) Len() int {
|
||||
return syscall.SizeofIfAddrmsg
|
||||
}
|
||||
|
||||
type RtMsg struct {
|
||||
syscall.RtMsg
|
||||
}
|
||||
|
||||
func newRtMsg(family int) *RtMsg {
|
||||
return &RtMsg{
|
||||
RtMsg: syscall.RtMsg{
|
||||
Family: uint8(family),
|
||||
Table: syscall.RT_TABLE_MAIN,
|
||||
Scope: syscall.RT_SCOPE_UNIVERSE,
|
||||
Protocol: syscall.RTPROT_BOOT,
|
||||
Type: syscall.RTN_UNICAST,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (msg *RtMsg) ToWireFormat() []byte {
|
||||
native := nativeEndian()
|
||||
|
||||
length := syscall.SizeofRtMsg
|
||||
b := make([]byte, length)
|
||||
b[0] = msg.Family
|
||||
b[1] = msg.Dst_len
|
||||
b[2] = msg.Src_len
|
||||
b[3] = msg.Tos
|
||||
b[4] = msg.Table
|
||||
b[5] = msg.Protocol
|
||||
b[6] = msg.Scope
|
||||
b[7] = msg.Type
|
||||
native.PutUint32(b[8:12], msg.Flags)
|
||||
return b
|
||||
}
|
||||
|
||||
func (msg *RtMsg) Len() int {
|
||||
return syscall.SizeofRtMsg
|
||||
}
|
||||
|
||||
func rtaAlignOf(attrlen int) int {
|
||||
return (attrlen + syscall.RTA_ALIGNTO - 1) & ^(syscall.RTA_ALIGNTO - 1)
|
||||
}
|
||||
|
||||
type RtAttr struct {
|
||||
syscall.RtAttr
|
||||
Data []byte
|
||||
children []NetlinkRequestData
|
||||
}
|
||||
|
||||
func newRtAttr(attrType int, data []byte) *RtAttr {
|
||||
return &RtAttr{
|
||||
RtAttr: syscall.RtAttr{
|
||||
Type: uint16(attrType),
|
||||
},
|
||||
children: []NetlinkRequestData{},
|
||||
Data: data,
|
||||
}
|
||||
}
|
||||
|
||||
func newRtAttrChild(parent *RtAttr, attrType int, data []byte) *RtAttr {
|
||||
attr := newRtAttr(attrType, data)
|
||||
parent.children = append(parent.children, attr)
|
||||
return attr
|
||||
}
|
||||
|
||||
func (a *RtAttr) Len() int {
|
||||
l := 0
|
||||
for _, child := range a.children {
|
||||
l += child.Len() + syscall.SizeofRtAttr
|
||||
}
|
||||
if l == 0 {
|
||||
l++
|
||||
}
|
||||
return rtaAlignOf(l + len(a.Data))
|
||||
}
|
||||
|
||||
func (a *RtAttr) ToWireFormat() []byte {
|
||||
native := nativeEndian()
|
||||
|
||||
length := a.Len()
|
||||
buf := make([]byte, rtaAlignOf(length+syscall.SizeofRtAttr))
|
||||
|
||||
if a.Data != nil {
|
||||
copy(buf[4:], a.Data)
|
||||
} else {
|
||||
next := 4
|
||||
for _, child := range a.children {
|
||||
childBuf := child.ToWireFormat()
|
||||
copy(buf[next:], childBuf)
|
||||
next += rtaAlignOf(len(childBuf))
|
||||
}
|
||||
}
|
||||
|
||||
if l := uint16(rtaAlignOf(length)); l != 0 {
|
||||
native.PutUint16(buf[0:2], l+1)
|
||||
}
|
||||
native.PutUint16(buf[2:4], a.Type)
|
||||
|
||||
return buf
|
||||
}
|
||||
|
||||
type NetlinkRequest struct {
|
||||
syscall.NlMsghdr
|
||||
Data []NetlinkRequestData
|
||||
}
|
||||
|
||||
func (rr *NetlinkRequest) ToWireFormat() []byte {
|
||||
native := nativeEndian()
|
||||
|
||||
length := rr.Len
|
||||
dataBytes := make([][]byte, len(rr.Data))
|
||||
for i, data := range rr.Data {
|
||||
dataBytes[i] = data.ToWireFormat()
|
||||
length += uint32(len(dataBytes[i]))
|
||||
}
|
||||
b := make([]byte, length)
|
||||
native.PutUint32(b[0:4], length)
|
||||
native.PutUint16(b[4:6], rr.Type)
|
||||
native.PutUint16(b[6:8], rr.Flags)
|
||||
native.PutUint32(b[8:12], rr.Seq)
|
||||
native.PutUint32(b[12:16], rr.Pid)
|
||||
|
||||
next := 16
|
||||
for _, data := range dataBytes {
|
||||
copy(b[next:], data)
|
||||
next += len(data)
|
||||
}
|
||||
return b
|
||||
}
|
||||
|
||||
func (rr *NetlinkRequest) AddData(data NetlinkRequestData) {
|
||||
if data != nil {
|
||||
rr.Data = append(rr.Data, data)
|
||||
}
|
||||
}
|
||||
|
||||
func newNetlinkRequest(proto, flags int) *NetlinkRequest {
|
||||
return &NetlinkRequest{
|
||||
NlMsghdr: syscall.NlMsghdr{
|
||||
Len: uint32(syscall.NLMSG_HDRLEN),
|
||||
Type: uint16(proto),
|
||||
Flags: syscall.NLM_F_REQUEST | uint16(flags),
|
||||
Seq: uint32(getSeq()),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
type NetlinkSocket struct {
|
||||
fd int
|
||||
lsa syscall.SockaddrNetlink
|
||||
}
|
||||
|
||||
func getNetlinkSocket() (*NetlinkSocket, error) {
|
||||
fd, err := syscall.Socket(syscall.AF_NETLINK, syscall.SOCK_RAW, syscall.NETLINK_ROUTE)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
s := &NetlinkSocket{
|
||||
fd: fd,
|
||||
}
|
||||
s.lsa.Family = syscall.AF_NETLINK
|
||||
if err := syscall.Bind(fd, &s.lsa); err != nil {
|
||||
syscall.Close(fd)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return s, nil
|
||||
}
|
||||
|
||||
func (s *NetlinkSocket) Close() {
|
||||
syscall.Close(s.fd)
|
||||
}
|
||||
|
||||
func (s *NetlinkSocket) Send(request *NetlinkRequest) error {
|
||||
if err := syscall.Sendto(s.fd, request.ToWireFormat(), 0, &s.lsa); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *NetlinkSocket) Receive() ([]syscall.NetlinkMessage, error) {
|
||||
rb := make([]byte, syscall.Getpagesize())
|
||||
nr, _, err := syscall.Recvfrom(s.fd, rb, 0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if nr < syscall.NLMSG_HDRLEN {
|
||||
return nil, ErrShortResponse
|
||||
}
|
||||
rb = rb[:nr]
|
||||
return syscall.ParseNetlinkMessage(rb)
|
||||
}
|
||||
|
||||
func (s *NetlinkSocket) GetPid() (uint32, error) {
|
||||
lsa, err := syscall.Getsockname(s.fd)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
switch v := lsa.(type) {
|
||||
case *syscall.SockaddrNetlink:
|
||||
return v.Pid, nil
|
||||
}
|
||||
return 0, ErrWrongSockType
|
||||
}
|
||||
|
||||
func (s *NetlinkSocket) HandleAck(seq uint32) error {
|
||||
native := nativeEndian()
|
||||
|
||||
pid, err := s.GetPid()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
done:
|
||||
for {
|
||||
msgs, err := s.Receive()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, m := range msgs {
|
||||
if m.Header.Seq != seq {
|
||||
return fmt.Errorf("Wrong Seq nr %d, expected %d", m.Header.Seq, seq)
|
||||
}
|
||||
if m.Header.Pid != pid {
|
||||
return fmt.Errorf("Wrong pid %d, expected %d", m.Header.Pid, pid)
|
||||
}
|
||||
if m.Header.Type == syscall.NLMSG_DONE {
|
||||
break done
|
||||
}
|
||||
if m.Header.Type == syscall.NLMSG_ERROR {
|
||||
error := int32(native.Uint32(m.Data[0:4]))
|
||||
if error == 0 {
|
||||
break done
|
||||
}
|
||||
return syscall.Errno(-error)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Add a new default gateway. Identical to:
|
||||
// ip route add default via $ip
|
||||
func AddDefaultGw(ip net.IP) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
family := getIpFamily(ip)
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_NEWROUTE, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
|
||||
|
||||
msg := newRtMsg(family)
|
||||
wb.AddData(msg)
|
||||
|
||||
var ipData []byte
|
||||
if family == syscall.AF_INET {
|
||||
ipData = ip.To4()
|
||||
} else {
|
||||
ipData = ip.To16()
|
||||
}
|
||||
|
||||
gateway := newRtAttr(syscall.RTA_GATEWAY, ipData)
|
||||
|
||||
wb.AddData(gateway)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
// Bring up a particular network interface
|
||||
func NetworkLinkUp(iface *net.Interface) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
msg.Change = syscall.IFF_UP
|
||||
msg.Flags = syscall.IFF_UP
|
||||
msg.Index = int32(iface.Index)
|
||||
wb.AddData(msg)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
func NetworkLinkDown(iface *net.Interface) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
msg.Change = syscall.IFF_UP
|
||||
msg.Flags = 0 & ^syscall.IFF_UP
|
||||
msg.Index = int32(iface.Index)
|
||||
wb.AddData(msg)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
func NetworkSetMTU(iface *net.Interface, mtu int) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
msg.Type = syscall.RTM_SETLINK
|
||||
msg.Flags = syscall.NLM_F_REQUEST
|
||||
msg.Index = int32(iface.Index)
|
||||
msg.Change = DEFAULT_CHANGE
|
||||
wb.AddData(msg)
|
||||
|
||||
var (
|
||||
b = make([]byte, 4)
|
||||
native = nativeEndian()
|
||||
)
|
||||
native.PutUint32(b, uint32(mtu))
|
||||
|
||||
data := newRtAttr(syscall.IFLA_MTU, b)
|
||||
wb.AddData(data)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
// same as ip link set $name master $master
|
||||
func NetworkSetMaster(iface, master *net.Interface) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
msg.Type = syscall.RTM_SETLINK
|
||||
msg.Flags = syscall.NLM_F_REQUEST
|
||||
msg.Index = int32(iface.Index)
|
||||
msg.Change = DEFAULT_CHANGE
|
||||
wb.AddData(msg)
|
||||
|
||||
var (
|
||||
b = make([]byte, 4)
|
||||
native = nativeEndian()
|
||||
)
|
||||
native.PutUint32(b, uint32(master.Index))
|
||||
|
||||
data := newRtAttr(syscall.IFLA_MASTER, b)
|
||||
wb.AddData(data)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
func NetworkSetNsPid(iface *net.Interface, nspid int) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
msg.Type = syscall.RTM_SETLINK
|
||||
msg.Flags = syscall.NLM_F_REQUEST
|
||||
msg.Index = int32(iface.Index)
|
||||
msg.Change = DEFAULT_CHANGE
|
||||
wb.AddData(msg)
|
||||
|
||||
var (
|
||||
b = make([]byte, 4)
|
||||
native = nativeEndian()
|
||||
)
|
||||
native.PutUint32(b, uint32(nspid))
|
||||
|
||||
data := newRtAttr(syscall.IFLA_NET_NS_PID, b)
|
||||
wb.AddData(data)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
func NetworkSetNsFd(iface *net.Interface, fd int) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
msg.Type = syscall.RTM_SETLINK
|
||||
msg.Flags = syscall.NLM_F_REQUEST
|
||||
msg.Index = int32(iface.Index)
|
||||
msg.Change = DEFAULT_CHANGE
|
||||
wb.AddData(msg)
|
||||
|
||||
var (
|
||||
b = make([]byte, 4)
|
||||
native = nativeEndian()
|
||||
)
|
||||
native.PutUint32(b, uint32(fd))
|
||||
|
||||
data := newRtAttr(IFLA_NET_NS_FD, b)
|
||||
wb.AddData(data)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
// Add an Ip address to an interface. This is identical to:
|
||||
// ip addr add $ip/$ipNet dev $iface
|
||||
func NetworkLinkAddIp(iface *net.Interface, ip net.IP, ipNet *net.IPNet) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
family := getIpFamily(ip)
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_NEWADDR, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfAddrmsg(family)
|
||||
msg.Index = uint32(iface.Index)
|
||||
prefixLen, _ := ipNet.Mask.Size()
|
||||
msg.Prefixlen = uint8(prefixLen)
|
||||
wb.AddData(msg)
|
||||
|
||||
var ipData []byte
|
||||
if family == syscall.AF_INET {
|
||||
ipData = ip.To4()
|
||||
} else {
|
||||
ipData = ip.To16()
|
||||
}
|
||||
|
||||
localData := newRtAttr(syscall.IFA_LOCAL, ipData)
|
||||
wb.AddData(localData)
|
||||
|
||||
addrData := newRtAttr(syscall.IFA_ADDRESS, ipData)
|
||||
wb.AddData(addrData)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
func zeroTerminated(s string) []byte {
|
||||
return []byte(s + "\000")
|
||||
}
|
||||
|
||||
func nonZeroTerminated(s string) []byte {
|
||||
return []byte(s)
|
||||
}
|
||||
|
||||
// Add a new network link of a specified type. This is identical to
// running: ip link add $name type $linkType
|
||||
func NetworkLinkAdd(name string, linkType string) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
wb.AddData(msg)
|
||||
|
||||
if name != "" {
|
||||
nameData := newRtAttr(syscall.IFLA_IFNAME, zeroTerminated(name))
|
||||
wb.AddData(nameData)
|
||||
}
|
||||
|
||||
kindData := newRtAttr(IFLA_INFO_KIND, nonZeroTerminated(linkType))
|
||||
|
||||
infoData := newRtAttr(syscall.IFLA_LINKINFO, kindData.ToWireFormat())
|
||||
wb.AddData(infoData)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
// Returns an array of IPNet for all the currently routed subnets on ipv4
|
||||
// This is similar to the first column of "ip route" output
|
||||
func NetworkGetRoutes() ([]Route, error) {
|
||||
native := nativeEndian()
|
||||
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_GETROUTE, syscall.NLM_F_DUMP)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
wb.AddData(msg)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
pid, err := s.GetPid()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
res := make([]Route, 0)
|
||||
|
||||
done:
|
||||
for {
|
||||
msgs, err := s.Receive()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, m := range msgs {
|
||||
if m.Header.Seq != wb.Seq {
|
||||
return nil, fmt.Errorf("Wrong Seq nr %d, expected 1", m.Header.Seq)
|
||||
}
|
||||
if m.Header.Pid != pid {
|
||||
return nil, fmt.Errorf("Wrong pid %d, expected %d", m.Header.Pid, pid)
|
||||
}
|
||||
if m.Header.Type == syscall.NLMSG_DONE {
|
||||
break done
|
||||
}
|
||||
if m.Header.Type == syscall.NLMSG_ERROR {
|
||||
error := int32(native.Uint32(m.Data[0:4]))
|
||||
if error == 0 {
|
||||
break done
|
||||
}
|
||||
return nil, syscall.Errno(-error)
|
||||
}
|
||||
if m.Header.Type != syscall.RTM_NEWROUTE {
|
||||
continue
|
||||
}
|
||||
|
||||
var r Route
|
||||
|
||||
msg := (*RtMsg)(unsafe.Pointer(&m.Data[0:syscall.SizeofRtMsg][0]))
|
||||
|
||||
if msg.Flags&syscall.RTM_F_CLONED != 0 {
|
||||
// Ignore cloned routes
|
||||
continue
|
||||
}
|
||||
|
||||
if msg.Table != syscall.RT_TABLE_MAIN {
|
||||
// Ignore non-main tables
|
||||
continue
|
||||
}
|
||||
|
||||
if msg.Family != syscall.AF_INET {
|
||||
// Ignore non-ipv4 routes
|
||||
continue
|
||||
}
|
||||
|
||||
if msg.Dst_len == 0 {
|
||||
// Default routes
|
||||
r.Default = true
|
||||
}
|
||||
|
||||
attrs, err := syscall.ParseNetlinkRouteAttr(&m)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, attr := range attrs {
|
||||
switch attr.Attr.Type {
|
||||
case syscall.RTA_DST:
|
||||
ip := attr.Value
|
||||
r.IPNet = &net.IPNet{
|
||||
IP: ip,
|
||||
Mask: net.CIDRMask(int(msg.Dst_len), 8*len(ip)),
|
||||
}
|
||||
case syscall.RTA_OIF:
|
||||
index := int(native.Uint32(attr.Value[0:4]))
|
||||
r.Iface, _ = net.InterfaceByIndex(index)
|
||||
}
|
||||
}
|
||||
if r.Default || r.IPNet != nil {
|
||||
res = append(res, r)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return res, nil
|
||||
}
|
||||
|
||||
func getIfSocket() (fd int, err error) {
|
||||
for _, socket := range []int{
|
||||
syscall.AF_INET,
|
||||
syscall.AF_PACKET,
|
||||
syscall.AF_INET6,
|
||||
} {
|
||||
if fd, err = syscall.Socket(socket, syscall.SOCK_DGRAM, 0); err == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
if err == nil {
|
||||
return fd, nil
|
||||
}
|
||||
return -1, err
|
||||
}
|
||||
|
||||
func NetworkChangeName(iface *net.Interface, newName string) error {
|
||||
fd, err := getIfSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer syscall.Close(fd)
|
||||
|
||||
data := [IFNAMSIZ * 2]byte{}
|
||||
// the "-1"s here are very important for ensuring we get proper null
|
||||
// termination of our new C strings
|
||||
copy(data[:IFNAMSIZ-1], iface.Name)
|
||||
copy(data[IFNAMSIZ:IFNAMSIZ*2-1], newName)
|
||||
|
||||
if _, _, errno := syscall.Syscall(syscall.SYS_IOCTL, uintptr(fd), syscall.SIOCSIFNAME, uintptr(unsafe.Pointer(&data[0]))); errno != 0 {
|
||||
return errno
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func NetworkCreateVethPair(name1, name2 string) error {
|
||||
s, err := getNetlinkSocket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer s.Close()
|
||||
|
||||
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
|
||||
|
||||
msg := newIfInfomsg(syscall.AF_UNSPEC)
|
||||
wb.AddData(msg)
|
||||
|
||||
nameData := newRtAttr(syscall.IFLA_IFNAME, zeroTerminated(name1))
|
||||
wb.AddData(nameData)
|
||||
|
||||
nest1 := newRtAttr(syscall.IFLA_LINKINFO, nil)
|
||||
newRtAttrChild(nest1, IFLA_INFO_KIND, zeroTerminated("veth"))
|
||||
nest2 := newRtAttrChild(nest1, IFLA_INFO_DATA, nil)
|
||||
nest3 := newRtAttrChild(nest2, VETH_INFO_PEER, nil)
|
||||
|
||||
newIfInfomsgChild(nest3, syscall.AF_UNSPEC)
|
||||
newRtAttrChild(nest3, syscall.IFLA_IFNAME, zeroTerminated(name2))
|
||||
|
||||
wb.AddData(nest1)
|
||||
|
||||
if err := s.Send(wb); err != nil {
|
||||
return err
|
||||
}
|
||||
return s.HandleAck(wb.Seq)
|
||||
}
|
||||
|
||||
// Create the actual bridge device. This is more backward-compatible than
|
||||
// netlink.NetworkLinkAdd and works on RHEL 6.
|
||||
func CreateBridge(name string, setMacAddr bool) error {
|
||||
s, err := syscall.Socket(syscall.AF_INET6, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
|
||||
if err != nil {
|
||||
// ipv6 issue, creating with ipv4
|
||||
s, err = syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
defer syscall.Close(s)
|
||||
|
||||
nameBytePtr, err := syscall.BytePtrFromString(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, uintptr(s), SIOC_BRADDBR, uintptr(unsafe.Pointer(nameBytePtr))); err != 0 {
|
||||
return err
|
||||
}
|
||||
if setMacAddr {
|
||||
return setBridgeMacAddress(s, name)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Add a slave to a bridge device. This is more backward-compatible than
// netlink.NetworkSetMaster and works on RHEL 6.
|
||||
func AddToBridge(iface, master *net.Interface) error {
|
||||
s, err := syscall.Socket(syscall.AF_INET6, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
|
||||
if err != nil {
|
||||
// ipv6 issue, creating with ipv4
|
||||
s, err = syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
defer syscall.Close(s)
|
||||
|
||||
ifr := ifreqIndex{}
|
||||
copy(ifr.IfrnName[:], master.Name)
|
||||
ifr.IfruIndex = int32(iface.Index)
|
||||
|
||||
if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, uintptr(s), SIOC_BRADDIF, uintptr(unsafe.Pointer(&ifr))); err != 0 {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func setBridgeMacAddress(s int, name string) error {
|
||||
ifr := ifreqHwaddr{}
|
||||
ifr.IfruHwaddr.Family = syscall.ARPHRD_ETHER
|
||||
copy(ifr.IfrnName[:], name)
|
||||
|
||||
for i := 0; i < 6; i++ {
|
||||
ifr.IfruHwaddr.Data[i] = int8(rand.Intn(255))
|
||||
}
|
||||
|
||||
ifr.IfruHwaddr.Data[0] &^= 0x1 // clear multicast bit
|
||||
ifr.IfruHwaddr.Data[0] |= 0x2 // set local assignment bit (IEEE802)
|
||||
|
||||
if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, uintptr(s), syscall.SIOCSIFHWADDR, uintptr(unsafe.Pointer(&ifr))); err != 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
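A short usage sketch for the Linux implementation above (not part of the vendored file): NetworkGetRoutes is a read-only RTM_GETROUTE dump, so it is a safe way to exercise the package, while the write paths (AddDefaultGw, NetworkLinkUp, CreateBridge, ...) issue RTM_NEW*/RTM_SETLINK requests and generally need CAP_NET_ADMIN. The import path mirrors the vendored location and is an assumption outside this tree.

```go
package main

import (
	"fmt"
	"log"

	"github.com/dotcloud/docker/pkg/netlink"
)

func main() {
	// Read-only dump of the main IPv4 routing table via RTM_GETROUTE.
	routes, err := netlink.NetworkGetRoutes()
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range routes {
		if r.Default && r.Iface != nil {
			fmt.Println("default route via interface", r.Iface.Name)
		}
	}
	// Mutating helpers build RTM_NEW*/RTM_SETLINK requests the same way and
	// typically require root (CAP_NET_ADMIN) to succeed.
}
```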
69
Godeps/_workspace/src/github.com/dotcloud/docker/pkg/netlink/netlink_unsupported.go
generated
vendored
@@ -1,69 +0,0 @@
|
||||
// +build !linux !amd64
|
||||
|
||||
package netlink
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"net"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrNotImplemented = errors.New("not implemented")
|
||||
)
|
||||
|
||||
func NetworkGetRoutes() ([]Route, error) {
|
||||
return nil, ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkLinkAdd(name string, linkType string) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkLinkUp(iface *net.Interface) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkLinkAddIp(iface *net.Interface, ip net.IP, ipNet *net.IPNet) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func AddDefaultGw(ip net.IP) error {
|
||||
return ErrNotImplemented
|
||||
|
||||
}
|
||||
|
||||
func NetworkSetMTU(iface *net.Interface, mtu int) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkCreateVethPair(name1, name2 string) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkChangeName(iface *net.Interface, newName string) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkSetNsFd(iface *net.Interface, fd int) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkSetNsPid(iface *net.Interface, nspid int) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkSetMaster(iface, master *net.Interface) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func NetworkLinkDown(iface *net.Interface) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func CreateBridge(name string, setMacAddr bool) error {
|
||||
return ErrNotImplemented
|
||||
}
|
||||
|
||||
func AddToBridge(iface, master *net.Interface) error {
|
||||
return ErrNotImplemented
|
||||
}
|
27
Godeps/_workspace/src/github.com/tarm/goserial/LICENSE
generated
vendored
@@ -1,27 +0,0 @@
|
||||
Copyright (c) 2009 The Go Authors. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above
|
||||
copyright notice, this list of conditions and the following disclaimer
|
||||
in the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
* Neither the name of Google Inc. nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
63
Godeps/_workspace/src/github.com/tarm/goserial/README.md
generated
vendored
@@ -1,63 +0,0 @@
|
||||
GoSerial
|
||||
========
|
||||
A simple go package to allow you to read and write from the
|
||||
serial port as a stream of bytes.
|
||||
|
||||
Details
|
||||
-------
|
||||
It aims to have the same API on all platforms, including windows. As
|
||||
an added bonus, the windows package does not use cgo, so you can cross
|
||||
compile for windows from another platform. Unfortunately goinstall
|
||||
does not currently let you cross compile so you will have to do it
|
||||
manually:
|
||||
|
||||
GOOS=windows make clean install
|
||||
|
||||
Currently there is very little in the way of configurability. You can
|
||||
set the baud rate. Then you can Read(), Write(), or Close() the
|
||||
connection. Read() will block until at least one byte is returned.
|
||||
Write is the same. There is currently no exposed way to set the
|
||||
timeouts, though patches are welcome.
|
||||
|
||||
Currently all ports are opened with 8 data bits, 1 stop bit, no
|
||||
parity, no hardware flow control, and no software flow control. This
|
||||
works fine for many real devices and many faux serial devices
|
||||
including usb-to-serial converters and bluetooth serial ports.
|
||||
|
||||
You may Read() and Write() simultaneously on the same connection (from
different goroutines).
|
||||
|
||||
Usage
|
||||
-----
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"github.com/tarm/goserial"
|
||||
"log"
|
||||
)
|
||||
|
||||
func main() {
|
||||
c := &serial.Config{Name: "COM45", Baud: 115200}
|
||||
s, err := serial.OpenPort(c)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
n, err := s.Write([]byte("test"))
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
buf := make([]byte, 128)
|
||||
n, err = s.Read(buf)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
log.Printf("%q", buf[:n])
|
||||
}
|
||||
```
|
||||
|
||||
Possible Future Work
|
||||
--------------------
|
||||
- better tests (loopback etc)
|
61
Godeps/_workspace/src/github.com/tarm/goserial/basic_test.go
generated
vendored
@@ -1,61 +0,0 @@
|
||||
package serial
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
func TestConnection(t *testing.T) {
|
||||
c0 := &Config{Name: "/dev/ttyUSB0", Baud: 115200}
|
||||
c1 := &Config{Name: "/dev/ttyUSB1", Baud: 115200}
|
||||
|
||||
s1, err := OpenPort(c0)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
s2, err := OpenPort(c1)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
ch := make(chan int, 1)
|
||||
go func() {
|
||||
buf := make([]byte, 128)
|
||||
var readCount int
|
||||
for {
|
||||
n, err := s2.Read(buf)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
readCount++
|
||||
t.Logf("Read %v %v bytes: % 02x %s", readCount, n, buf[:n], buf[:n])
|
||||
select {
|
||||
case <-ch:
|
||||
ch <- readCount
|
||||
close(ch)
|
||||
default:
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
if _, err = s1.Write([]byte("hello")); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if _, err = s1.Write([]byte(" ")); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
time.Sleep(time.Second)
|
||||
if _, err = s1.Write([]byte("world")); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
time.Sleep(time.Second / 10)
|
||||
|
||||
ch <- 0
|
||||
s1.Write([]byte(" ")) // We could be blocked in the read without this
|
||||
c := <-ch
|
||||
exp := 5
|
||||
if c >= exp {
|
||||
t.Fatalf("Expected less than %v read, got %v", exp, c)
|
||||
}
|
||||
}
|
99
Godeps/_workspace/src/github.com/tarm/goserial/serial.go
generated
vendored
@@ -1,99 +0,0 @@
|
||||
/*
|
||||
Goserial is a simple go package to allow you to read and write from
|
||||
the serial port as a stream of bytes.
|
||||
|
||||
It aims to have the same API on all platforms, including windows. As
|
||||
an added bonus, the windows package does not use cgo, so you can cross
|
||||
compile for windows from another platform. Unfortunately goinstall
|
||||
does not currently let you cross compile so you will have to do it
|
||||
manually:
|
||||
|
||||
GOOS=windows make clean install
|
||||
|
||||
Currently there is very little in the way of configurability. You can
|
||||
set the baud rate. Then you can Read(), Write(), or Close() the
|
||||
connection. Read() will block until at least one byte is returned.
|
||||
Write is the same. There is currently no exposed way to set the
|
||||
timeouts, though patches are welcome.
|
||||
|
||||
Currently all ports are opened with 8 data bits, 1 stop bit, no
|
||||
parity, no hardware flow control, and no software flow control. This
|
||||
works fine for many real devices and many faux serial devices
|
||||
including usb-to-serial converters and bluetooth serial ports.
|
||||
|
||||
You may Read() and Write() simultaneously on the same connection (from
different goroutines).
|
||||
|
||||
Example usage:
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"github.com/tarm/goserial"
|
||||
"log"
|
||||
)
|
||||
|
||||
func main() {
|
||||
c := &serial.Config{Name: "COM5", Baud: 115200}
|
||||
s, err := serial.OpenPort(c)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
n, err := s.Write([]byte("test"))
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
buf := make([]byte, 128)
|
||||
n, err = s.Read(buf)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
log.Printf("%q", buf[:n])
|
||||
}
|
||||
*/
|
||||
package serial
|
||||
|
||||
import "io"
|
||||
|
||||
// Config contains the information needed to open a serial port.
|
||||
//
|
||||
// Currently few options are implemented, but more may be added in the
|
||||
// future (patches welcome), so it is recommended that you create a
|
||||
// new config addressing the fields by name rather than by order.
|
||||
//
|
||||
// For example:
|
||||
//
|
||||
// c0 := &serial.Config{Name: "COM45", Baud: 115200}
|
||||
// or
|
||||
// c1 := new(serial.Config)
|
||||
// c1.Name = "/dev/tty.usbserial"
|
||||
// c1.Baud = 115200
|
||||
//
|
||||
type Config struct {
|
||||
Name string
|
||||
Baud int
|
||||
|
||||
// Size int // 0 get translated to 8
|
||||
// Parity SomeNewTypeToGetCorrectDefaultOf_None
|
||||
// StopBits SomeNewTypeToGetCorrectDefaultOf_1
|
||||
|
||||
// RTSFlowControl bool
|
||||
// DTRFlowControl bool
|
||||
// XONFlowControl bool
|
||||
|
||||
// CRLFTranslate bool
|
||||
// TimeoutStuff int
|
||||
}
|
||||
|
||||
// OpenPort opens a serial port with the specified configuration
|
||||
func OpenPort(c *Config) (io.ReadWriteCloser, error) {
|
||||
return openPort(c.Name, c.Baud)
|
||||
}
|
||||
|
||||
// func Flush()
|
||||
|
||||
// func SendBreak()
|
||||
|
||||
// func RegisterBreakHandler(func())
|
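Since Read() on the returned port may hand back as little as a single byte, callers that need whole messages typically wrap the port in a buffered reader, much as the cepgo fetcher earlier in this diff does with bufio.ReadBytes. A minimal sketch follows; the device name and newline delimiter are assumptions for illustration, not taken from the original.

```go
package main

import (
	"bufio"
	"log"

	"github.com/tarm/goserial" // package name is "serial", as in the README above
)

func main() {
	// Assumed device path and baud rate; adjust for your hardware.
	c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
	port, err := serial.OpenPort(c)
	if err != nil {
		log.Fatal(err)
	}
	defer port.Close()

	// A single Read may return after one byte, so buffer until the delimiter.
	line, err := bufio.NewReader(port).ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("got: %q", line)
}
```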
90
Godeps/_workspace/src/github.com/tarm/goserial/serial_linux.go
generated
vendored
@@ -1,90 +0,0 @@
|
||||
// +build linux,!cgo
|
||||
|
||||
package serial
|
||||
|
||||
import (
|
||||
"io"
|
||||
"os"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
|
||||
|
||||
var bauds = map[int]uint32{
|
||||
50: syscall.B50,
|
||||
75: syscall.B75,
|
||||
110: syscall.B110,
|
||||
134: syscall.B134,
|
||||
150: syscall.B150,
|
||||
200: syscall.B200,
|
||||
300: syscall.B300,
|
||||
600: syscall.B600,
|
||||
1200: syscall.B1200,
|
||||
1800: syscall.B1800,
|
||||
2400: syscall.B2400,
|
||||
4800: syscall.B4800,
|
||||
9600: syscall.B9600,
|
||||
19200: syscall.B19200,
|
||||
38400: syscall.B38400,
|
||||
57600: syscall.B57600,
|
||||
115200: syscall.B115200,
|
||||
230400: syscall.B230400,
|
||||
460800: syscall.B460800,
|
||||
500000: syscall.B500000,
|
||||
576000: syscall.B576000,
|
||||
921600: syscall.B921600,
|
||||
1000000: syscall.B1000000,
|
||||
1152000: syscall.B1152000,
|
||||
1500000: syscall.B1500000,
|
||||
2000000: syscall.B2000000,
|
||||
2500000: syscall.B2500000,
|
||||
3000000: syscall.B3000000,
|
||||
3500000: syscall.B3500000,
|
||||
4000000: syscall.B4000000,
|
||||
}
|
||||
|
||||
rate := bauds[baud]
|
||||
|
||||
if rate == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
f, err := os.OpenFile(name, syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_NONBLOCK, 0666)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
defer func() {
|
||||
if err != nil && f != nil {
|
||||
f.Close()
|
||||
}
|
||||
}()
|
||||
|
||||
fd := f.Fd()
|
||||
t := syscall.Termios{
|
||||
Iflag: syscall.IGNPAR,
|
||||
Cflag: syscall.CS8 | syscall.CREAD | syscall.CLOCAL | rate,
|
||||
Cc: [32]uint8{syscall.VMIN: 1},
|
||||
Ispeed: rate,
|
||||
Ospeed: rate,
|
||||
}
|
||||
|
||||
if _, _, errno := syscall.Syscall6(
|
||||
syscall.SYS_IOCTL,
|
||||
uintptr(fd),
|
||||
uintptr(syscall.TCSETS),
|
||||
uintptr(unsafe.Pointer(&t)),
|
||||
0,
|
||||
0,
|
||||
0,
|
||||
); errno != 0 {
|
||||
return nil, errno
|
||||
}
|
||||
|
||||
if err = syscall.SetNonblock(int(fd), false); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
return f, nil
|
||||
}
|
107
Godeps/_workspace/src/github.com/tarm/goserial/serial_posix.go
generated
vendored
@@ -1,107 +0,0 @@
|
||||
// +build !windows,cgo
|
||||
|
||||
package serial
|
||||
|
||||
// #include <termios.h>
|
||||
// #include <unistd.h>
|
||||
import "C"
|
||||
|
||||
// TODO: Maybe change to using syscall package + ioctl instead of cgo
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"syscall"
|
||||
//"unsafe"
|
||||
)
|
||||
|
||||
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
|
||||
f, err := os.OpenFile(name, syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_NONBLOCK, 0666)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
fd := C.int(f.Fd())
|
||||
if C.isatty(fd) != 1 {
|
||||
f.Close()
|
||||
return nil, errors.New("File is not a tty")
|
||||
}
|
||||
|
||||
var st C.struct_termios
|
||||
_, err = C.tcgetattr(fd, &st)
|
||||
if err != nil {
|
||||
f.Close()
|
||||
return nil, err
|
||||
}
|
||||
var speed C.speed_t
|
||||
switch baud {
|
||||
case 115200:
|
||||
speed = C.B115200
|
||||
case 57600:
|
||||
speed = C.B57600
|
||||
case 38400:
|
||||
speed = C.B38400
|
||||
case 19200:
|
||||
speed = C.B19200
|
||||
case 9600:
|
||||
speed = C.B9600
|
||||
case 4800:
|
||||
speed = C.B4800
|
||||
case 2400:
|
||||
speed = C.B2400
|
||||
default:
|
||||
f.Close()
|
||||
return nil, fmt.Errorf("Unknown baud rate %v", baud)
|
||||
}
|
||||
|
||||
_, err = C.cfsetispeed(&st, speed)
|
||||
if err != nil {
|
||||
f.Close()
|
||||
return nil, err
|
||||
}
|
||||
_, err = C.cfsetospeed(&st, speed)
|
||||
if err != nil {
|
||||
f.Close()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Select local mode
|
||||
st.c_cflag |= (C.CLOCAL | C.CREAD)
|
||||
|
||||
// Select raw mode
|
||||
st.c_lflag &= ^C.tcflag_t(C.ICANON | C.ECHO | C.ECHOE | C.ISIG)
|
||||
st.c_oflag &= ^C.tcflag_t(C.OPOST)
|
||||
|
||||
_, err = C.tcsetattr(fd, C.TCSANOW, &st)
|
||||
if err != nil {
|
||||
f.Close()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
//fmt.Println("Tweaking", name)
|
||||
r1, _, e := syscall.Syscall(syscall.SYS_FCNTL,
|
||||
uintptr(f.Fd()),
|
||||
uintptr(syscall.F_SETFL),
|
||||
uintptr(0))
|
||||
if e != 0 || r1 != 0 {
|
||||
s := fmt.Sprint("Clearing NONBLOCK syscall error:", e, r1)
|
||||
f.Close()
|
||||
return nil, errors.New(s)
|
||||
}
|
||||
|
||||
/*
|
||||
r1, _, e = syscall.Syscall(syscall.SYS_IOCTL,
|
||||
uintptr(f.Fd()),
|
||||
uintptr(0x80045402), // IOSSIOSPEED
|
||||
uintptr(unsafe.Pointer(&baud)));
|
||||
if e != 0 || r1 != 0 {
|
||||
s := fmt.Sprint("Baudrate syscall error:", e, r1)
|
||||
f.Close()
|
||||
return nil, os.NewError(s)
|
||||
}
|
||||
*/
|
||||
|
||||
return f, nil
|
||||
}
|
263
Godeps/_workspace/src/github.com/tarm/goserial/serial_windows.go
generated
vendored
@@ -1,263 +0,0 @@
|
||||
// +build windows
|
||||
|
||||
package serial
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"sync"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
type serialPort struct {
|
||||
f *os.File
|
||||
fd syscall.Handle
|
||||
rl sync.Mutex
|
||||
wl sync.Mutex
|
||||
ro *syscall.Overlapped
|
||||
wo *syscall.Overlapped
|
||||
}
|
||||
|
||||
type structDCB struct {
|
||||
DCBlength, BaudRate uint32
|
||||
flags [4]byte
|
||||
wReserved, XonLim, XoffLim uint16
|
||||
ByteSize, Parity, StopBits byte
|
||||
XonChar, XoffChar, ErrorChar, EofChar, EvtChar byte
|
||||
wReserved1 uint16
|
||||
}
|
||||
|
||||
type structTimeouts struct {
|
||||
ReadIntervalTimeout uint32
|
||||
ReadTotalTimeoutMultiplier uint32
|
||||
ReadTotalTimeoutConstant uint32
|
||||
WriteTotalTimeoutMultiplier uint32
|
||||
WriteTotalTimeoutConstant uint32
|
||||
}
|
||||
|
||||
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
|
||||
if len(name) > 0 && name[0] != '\\' {
|
||||
name = "\\\\.\\" + name
|
||||
}
|
||||
|
||||
h, err := syscall.CreateFile(syscall.StringToUTF16Ptr(name),
|
||||
syscall.GENERIC_READ|syscall.GENERIC_WRITE,
|
||||
0,
|
||||
nil,
|
||||
syscall.OPEN_EXISTING,
|
||||
syscall.FILE_ATTRIBUTE_NORMAL|syscall.FILE_FLAG_OVERLAPPED,
|
||||
0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
f := os.NewFile(uintptr(h), name)
|
||||
defer func() {
|
||||
if err != nil {
|
||||
f.Close()
|
||||
}
|
||||
}()
|
||||
|
||||
if err = setCommState(h, baud); err != nil {
|
||||
return
|
||||
}
|
||||
if err = setupComm(h, 64, 64); err != nil {
|
||||
return
|
||||
}
|
||||
if err = setCommTimeouts(h); err != nil {
|
||||
return
|
||||
}
|
||||
if err = setCommMask(h); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
ro, err := newOverlapped()
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
wo, err := newOverlapped()
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
port := new(serialPort)
|
||||
port.f = f
|
||||
port.fd = h
|
||||
port.ro = ro
|
||||
port.wo = wo
|
||||
|
||||
return port, nil
|
||||
}
|
||||
|
||||
func (p *serialPort) Close() error {
|
||||
return p.f.Close()
|
||||
}
|
||||
|
||||
func (p *serialPort) Write(buf []byte) (int, error) {
|
||||
p.wl.Lock()
|
||||
defer p.wl.Unlock()
|
||||
|
||||
if err := resetEvent(p.wo.HEvent); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
var n uint32
|
||||
err := syscall.WriteFile(p.fd, buf, &n, p.wo)
|
||||
if err != nil && err != syscall.ERROR_IO_PENDING {
|
||||
return int(n), err
|
||||
}
|
||||
return getOverlappedResult(p.fd, p.wo)
|
||||
}
|
||||
|
||||
func (p *serialPort) Read(buf []byte) (int, error) {
|
||||
if p == nil || p.f == nil {
|
||||
return 0, fmt.Errorf("Invalid port on read %v %v", p, p.f)
|
||||
}
|
||||
|
||||
p.rl.Lock()
|
||||
defer p.rl.Unlock()
|
||||
|
||||
if err := resetEvent(p.ro.HEvent); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
var done uint32
|
||||
err := syscall.ReadFile(p.fd, buf, &done, p.ro)
|
||||
if err != nil && err != syscall.ERROR_IO_PENDING {
|
||||
return int(done), err
|
||||
}
|
||||
return getOverlappedResult(p.fd, p.ro)
|
||||
}
|
||||
|
||||
var (
|
||||
nSetCommState,
|
||||
nSetCommTimeouts,
|
||||
nSetCommMask,
|
||||
nSetupComm,
|
||||
nGetOverlappedResult,
|
||||
nCreateEvent,
|
||||
nResetEvent uintptr
|
||||
)
|
||||
|
||||
func init() {
|
||||
k32, err := syscall.LoadLibrary("kernel32.dll")
|
||||
if err != nil {
|
||||
panic("LoadLibrary " + err.Error())
|
||||
}
|
||||
defer syscall.FreeLibrary(k32)
|
||||
|
||||
nSetCommState = getProcAddr(k32, "SetCommState")
|
||||
nSetCommTimeouts = getProcAddr(k32, "SetCommTimeouts")
|
||||
nSetCommMask = getProcAddr(k32, "SetCommMask")
|
||||
nSetupComm = getProcAddr(k32, "SetupComm")
|
||||
nGetOverlappedResult = getProcAddr(k32, "GetOverlappedResult")
|
||||
nCreateEvent = getProcAddr(k32, "CreateEventW")
|
||||
nResetEvent = getProcAddr(k32, "ResetEvent")
|
||||
}
|
||||
|
||||
func getProcAddr(lib syscall.Handle, name string) uintptr {
|
||||
addr, err := syscall.GetProcAddress(lib, name)
|
||||
if err != nil {
|
||||
panic(name + " " + err.Error())
|
||||
}
|
||||
return addr
|
||||
}
|
||||
|
||||
func setCommState(h syscall.Handle, baud int) error {
|
||||
var params structDCB
|
||||
params.DCBlength = uint32(unsafe.Sizeof(params))
|
||||
|
||||
params.flags[0] = 0x01 // fBinary
|
||||
params.flags[0] |= 0x10 // Assert DSR
|
||||
|
||||
params.BaudRate = uint32(baud)
|
||||
params.ByteSize = 8
|
||||
|
||||
r, _, err := syscall.Syscall(nSetCommState, 2, uintptr(h), uintptr(unsafe.Pointer(¶ms)), 0)
|
||||
if r == 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func setCommTimeouts(h syscall.Handle) error {
|
||||
var timeouts structTimeouts
|
||||
const MAXDWORD = 1<<32 - 1
|
||||
timeouts.ReadIntervalTimeout = MAXDWORD
|
||||
timeouts.ReadTotalTimeoutMultiplier = MAXDWORD
|
||||
timeouts.ReadTotalTimeoutConstant = MAXDWORD - 1
|
||||
|
||||
/* From http://msdn.microsoft.com/en-us/library/aa363190(v=VS.85).aspx
|
||||
|
||||
For blocking I/O see below:
|
||||
|
||||
Remarks:
|
||||
|
||||
If an application sets ReadIntervalTimeout and
|
||||
ReadTotalTimeoutMultiplier to MAXDWORD and sets
|
||||
ReadTotalTimeoutConstant to a value greater than zero and
|
||||
less than MAXDWORD, one of the following occurs when the
|
||||
ReadFile function is called:
|
||||
|
||||
If there are any bytes in the input buffer, ReadFile returns
|
||||
immediately with the bytes in the buffer.
|
||||
|
||||
If there are no bytes in the input buffer, ReadFile waits
|
||||
until a byte arrives and then returns immediately.
|
||||
|
||||
If no bytes arrive within the time specified by
|
||||
ReadTotalTimeoutConstant, ReadFile times out.
|
||||
*/
|
||||
|
||||
r, _, err := syscall.Syscall(nSetCommTimeouts, 2, uintptr(h), uintptr(unsafe.Pointer(&timeouts)), 0)
|
||||
if r == 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func setupComm(h syscall.Handle, in, out int) error {
|
||||
r, _, err := syscall.Syscall(nSetupComm, 3, uintptr(h), uintptr(in), uintptr(out))
|
||||
if r == 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func setCommMask(h syscall.Handle) error {
|
||||
const EV_RXCHAR = 0x0001
|
||||
r, _, err := syscall.Syscall(nSetCommMask, 2, uintptr(h), EV_RXCHAR, 0)
|
||||
if r == 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func resetEvent(h syscall.Handle) error {
|
||||
r, _, err := syscall.Syscall(nResetEvent, 1, uintptr(h), 0, 0)
|
||||
if r == 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func newOverlapped() (*syscall.Overlapped, error) {
|
||||
var overlapped syscall.Overlapped
|
||||
r, _, err := syscall.Syscall6(nCreateEvent, 4, 0, 1, 0, 0, 0, 0)
|
||||
if r == 0 {
|
||||
return nil, err
|
||||
}
|
||||
overlapped.HEvent = syscall.Handle(r)
|
||||
return &overlapped, nil
|
||||
}
|
||||
|
||||
func getOverlappedResult(h syscall.Handle, overlapped *syscall.Overlapped) (int, error) {
|
||||
var n int
|
||||
r, _, err := syscall.Syscall6(nGetOverlappedResult, 4,
|
||||
uintptr(h),
|
||||
uintptr(unsafe.Pointer(overlapped)),
|
||||
uintptr(unsafe.Pointer(&n)), 1, 0, 0)
|
||||
if r == 0 {
|
||||
return n, err
|
||||
}
|
||||
|
||||
return n, nil
|
||||
}
|
202	LICENSE
@@ -1,202 +0,0 @@
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "{}"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright {yyyy} {name of copyright owner}
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
|
@@ -1,3 +0,0 @@
Alex Crawford <alex.crawford@coreos.com> (@crawford)
Jonathan Boulle <jonathan.boulle@coreos.com> (@jonboulle)
Brian Waldon <brian.waldon@coreos.com> (@bcwaldon)
5	NOTICE
@@ -1,5 +0,0 @@
CoreOS Project
Copyright 2014 CoreOS, Inc

This product includes software developed at CoreOS, Inc.
(http://www.coreos.com/).
80	README.md
@@ -1,79 +1,9 @@
# coreos-cloudinit [](https://travis-ci.org/coreos/coreos-cloudinit)
# coreos-cloudinit

coreos-cloudinit enables a user to customize CoreOS machines by providing either a cloud-config document or an executable script through user-data.
coreos-cloudinit allows a user to customize CoreOS machines by providing either an executable script or a cloud-config document as instance user-data. See below to learn how to use these features.

## Configuration with cloud-config
## Supported Cloud-Config Features

A subset of the [official cloud-config spec][official-cloud-config] is implemented by coreos-cloudinit.
Additionally, several [CoreOS-specific options][custom-cloud-config] have been implemented to support interacting with unit files, bootstrapping etcd clusters, and more.
All supported cloud-config parameters are [documented here][all-cloud-config].
Only a subset of [cloud-config functionality][cloud-config] is implemented. A set of custom parameters were added to the cloud-config format that are specific to CoreOS, which are [documented here](https://github.com/coreos/coreos-cloudinit/tree/master/Documentation/cloud-config.md).

[official-cloud-config]: http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data
[custom-cloud-config]: https://github.com/coreos/coreos-cloudinit/blob/master/Documentation/cloud-config.md#coreos-parameters
[all-cloud-config]: https://github.com/coreos/coreos-cloudinit/tree/master/Documentation/cloud-config.md

The following is an example cloud-config document:

```
#cloud-config

coreos:
    units:
      - name: etcd.service
        command: start

users:
  - name: core
    passwd: $1$allJZawX$00S5T756I5PGdQga5qhqv1

write_files:
  - path: /etc/resolv.conf
    content: |
      nameserver 192.0.2.2
      nameserver 192.0.2.3
```

## Executing a Script

coreos-cloudinit supports executing user-data as a script instead of parsing it as a cloud-config document.
Make sure the first line of your user-data is a shebang and coreos-cloudinit will attempt to execute it:

```
#!/bin/bash

echo 'Hello, world!'
```

## user-data Field Substitution

coreos-cloudinit will replace the following set of tokens in your user-data with system-generated values.

| Token | Description |
| ------------- | ----------- |
| $public_ipv4 | Public IPv4 address of machine |
| $private_ipv4 | Private IPv4 address of machine |

These values are determined by CoreOS based on the given provider on which your machine is running.
Read more about provider-specific functionality in the [CoreOS OEM documentation][oem-doc].

[oem-doc]: https://coreos.com/docs/sdk-distributors/distributors/notes-for-distributors/

For example, submitting the following user-data...

```
#cloud-config
coreos:
    etcd:
        addr: $public_ipv4:4001
        peer-addr: $private_ipv4:7001
```

...will result in this cloud-config document being executed:

```
#cloud-config
coreos:
    etcd:
        addr: 203.0.113.29:4001
        peer-addr: 192.0.2.13:7001
```
[cloud-config]: http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data
12	build
@@ -1,14 +1,6 @@
#!/bin/bash -e

ORG_PATH="github.com/coreos"
REPO_PATH="${ORG_PATH}/coreos-cloudinit"

if [ ! -h gopath/src/${REPO_PATH} ]; then
	mkdir -p gopath/src/${ORG_PATH}
	ln -s ../../../.. gopath/src/${REPO_PATH} || exit 255
fi

export GOBIN=${PWD}/bin
export GOPATH=${PWD}/gopath
export GOPATH=${PWD}

go build -o bin/coreos-cloudinit ${REPO_PATH}
go build -o bin/coreos-cloudinit github.com/coreos/coreos-cloudinit
143	cloudinit/cloud_config.go (Normal file)
@@ -0,0 +1,143 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/third_party/launchpad.net/goyaml"
|
||||
)
|
||||
|
||||
const DefaultSSHKeyName = "coreos-cloudinit"
|
||||
|
||||
type CloudConfig struct {
|
||||
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
|
||||
Coreos struct {
|
||||
Etcd struct{ Discovery_URL string }
|
||||
Fleet struct{ Autostart bool }
|
||||
Units []Unit
|
||||
}
|
||||
WriteFiles []WriteFile `yaml:"write_files"`
|
||||
Hostname string
|
||||
Users []User
|
||||
}
|
||||
|
||||
func NewCloudConfig(contents []byte) (*CloudConfig, error) {
|
||||
var cfg CloudConfig
|
||||
err := goyaml.Unmarshal(contents, &cfg)
|
||||
return &cfg, err
|
||||
}
|
||||
|
||||
func (cc CloudConfig) String() string {
|
||||
bytes, err := goyaml.Marshal(cc)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
stringified := string(bytes)
|
||||
stringified = fmt.Sprintf("#cloud-config\n%s", stringified)
|
||||
|
||||
return stringified
|
||||
}
|
||||
|
||||
func ApplyCloudConfig(cfg CloudConfig, sshKeyName string) error {
|
||||
if cfg.Hostname != "" {
|
||||
if err := SetHostname(cfg.Hostname); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Set hostname to %s", cfg.Hostname)
|
||||
}
|
||||
|
||||
if len(cfg.Users) > 0 {
|
||||
for _, user := range cfg.Users {
|
||||
if user.Name == "" {
|
||||
log.Printf("User object has no 'name' field, skipping")
|
||||
continue
|
||||
}
|
||||
|
||||
if UserExists(&user) {
|
||||
log.Printf("User '%s' exists, ignoring creation-time fields", user.Name)
|
||||
if user.PasswordHash != "" {
|
||||
log.Printf("Setting '%s' user's password", user.Name)
|
||||
if err := SetUserPassword(user.Name, user.PasswordHash); err != nil {
|
||||
log.Printf("Failed setting '%s' user's password: %v", user.Name, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
} else {
|
||||
log.Printf("Creating user '%s'", user.Name)
|
||||
if err := CreateUser(&user); err != nil {
|
||||
log.Printf("Failed creating user '%s': %v", user.Name, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(user.SSHAuthorizedKeys) > 0 {
|
||||
log.Printf("Authorizing %d SSH keys for user '%s'", len(user.SSHAuthorizedKeys), user.Name)
|
||||
if err := AuthorizeSSHKeys(user.Name, sshKeyName, user.SSHAuthorizedKeys); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.SSHAuthorizedKeys) > 0 {
|
||||
err := AuthorizeSSHKeys("core", sshKeyName, cfg.SSHAuthorizedKeys)
|
||||
if err == nil {
|
||||
log.Printf("Authorized SSH keys for core user")
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) > 0 {
|
||||
for _, file := range cfg.WriteFiles {
|
||||
if err := ProcessWriteFile("/", &file); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Wrote file %s to filesystem", file.Path)
|
||||
}
|
||||
}
|
||||
|
||||
if cfg.Coreos.Etcd.Discovery_URL != "" {
|
||||
err := PersistEtcdDiscoveryURL(cfg.Coreos.Etcd.Discovery_URL)
|
||||
if err == nil {
|
||||
log.Printf("Consumed etcd discovery url")
|
||||
} else {
|
||||
log.Fatalf("Failed to persist etcd discovery url to filesystem: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.Coreos.Units) > 0 {
|
||||
for _, unit := range cfg.Coreos.Units {
|
||||
log.Printf("Placing unit %s on filesystem", unit.Name)
|
||||
dst, err := PlaceUnit("/", &unit)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Placed unit %s at %s", unit.Name, dst)
|
||||
|
||||
if unit.Group() != "network" {
|
||||
log.Printf("Enabling unit file %s", dst)
|
||||
if err := EnableUnitFile(dst, unit.Runtime); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Printf("Enabled unit %s", unit.Name)
|
||||
} else {
|
||||
log.Printf("Skipping enable for network-like unit %s", unit.Name)
|
||||
}
|
||||
}
|
||||
DaemonReload()
|
||||
StartUnits(cfg.Coreos.Units)
|
||||
}
|
||||
|
||||
if cfg.Coreos.Fleet.Autostart {
|
||||
err := StartUnitByName("fleet.service")
|
||||
if err == nil {
|
||||
log.Printf("Started fleet service.")
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
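The new cloud_config.go above exposes NewCloudConfig and ApplyCloudConfig. As a rough illustration of how a caller might wire them together, here is a minimal sketch; the import path and the user-data location are assumptions, not part of this change.

```go
// Hypothetical caller; the import path and user-data location are assumptions.
package main

import (
	"io/ioutil"
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	// Read a #cloud-config document from disk (path is illustrative).
	contents, err := ioutil.ReadFile("/var/lib/coreos-cloudinit/user-data")
	if err != nil {
		log.Fatal(err)
	}

	// Parse the YAML and apply it to the running system.
	cfg, err := cloudinit.NewCloudConfig(contents)
	if err != nil {
		log.Fatal(err)
	}
	if err := cloudinit.ApplyCloudConfig(*cfg, cloudinit.DefaultSSHKeyName); err != nil {
		log.Fatal(err)
	}
}
```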
252	cloudinit/cloud_config_test.go (Normal file)
@@ -0,0 +1,252 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfigEmpty(t *testing.T) {
|
||||
cfg, err := NewCloudConfig([]byte{})
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
|
||||
if cfg.Coreos.Etcd.Discovery_URL != "" {
|
||||
t.Error("Parsed incorrect value of discovery url")
|
||||
}
|
||||
|
||||
if cfg.Coreos.Fleet.Autostart {
|
||||
t.Error("Expected AutostartFleet not to be defined")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 0 {
|
||||
t.Error("Expected zero WriteFiles")
|
||||
}
|
||||
|
||||
if cfg.Hostname != "" {
|
||||
t.Errorf("Expected hostname to be empty, got '%s'", cfg.Hostname)
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfig(t *testing.T) {
|
||||
contents := []byte(`
|
||||
coreos:
|
||||
etcd:
|
||||
discovery_url: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
|
||||
fleet:
|
||||
autostart: Yes
|
||||
units:
|
||||
- name: 50-eth0.network
|
||||
runtime: yes
|
||||
content: '[Match]
|
||||
|
||||
Name=eth47
|
||||
|
||||
|
||||
[Network]
|
||||
|
||||
Address=10.209.171.177/19
|
||||
|
||||
'
|
||||
ssh_authorized_keys:
|
||||
- foobar
|
||||
- foobaz
|
||||
write_files:
|
||||
- content: |
|
||||
penny
|
||||
elroy
|
||||
path: /etc/dogepack.conf
|
||||
permissions: '0644'
|
||||
owner: root:dogepack
|
||||
hostname: trontastic
|
||||
`)
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 2 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
} else if keys[0] != "foobar" {
|
||||
t.Error("Expected first SSH key to be 'foobar'")
|
||||
} else if keys[1] != "foobaz" {
|
||||
t.Error("Expected first SSH key to be 'foobaz'")
|
||||
}
|
||||
|
||||
if cfg.Coreos.Etcd.Discovery_URL != "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877" {
|
||||
t.Error("Failed to parse etcd discovery url")
|
||||
}
|
||||
|
||||
if !cfg.Coreos.Fleet.Autostart {
|
||||
t.Error("Expected AutostartFleet to be true")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 1 {
|
||||
t.Error("Failed to parse correct number of write_files")
|
||||
} else {
|
||||
wf := cfg.WriteFiles[0]
|
||||
if wf.Content != "penny\nelroy\n" {
|
||||
t.Errorf("WriteFile has incorrect contents '%s'", wf.Content)
|
||||
}
|
||||
if wf.Encoding != "" {
|
||||
t.Errorf("WriteFile has incorrect encoding %s", wf.Encoding)
|
||||
}
|
||||
if wf.Permissions != "0644" {
|
||||
t.Errorf("WriteFile has incorrect permissions %s", wf.Permissions)
|
||||
}
|
||||
if wf.Path != "/etc/dogepack.conf" {
|
||||
t.Errorf("WriteFile has incorrect path %s", wf.Path)
|
||||
}
|
||||
if wf.Owner != "root:dogepack" {
|
||||
t.Errorf("WriteFile has incorrect owner %s", wf.Owner)
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.Coreos.Units) != 1 {
|
||||
t.Error("Failed to parse correct number of units")
|
||||
} else {
|
||||
u := cfg.Coreos.Units[0]
|
||||
expect := `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`
|
||||
if u.Content != expect {
|
||||
t.Errorf("Unit has incorrect contents '%s'.\nExpected '%s'.", u.Content, expect)
|
||||
}
|
||||
if u.Runtime != true {
|
||||
t.Errorf("Unit has incorrect runtime value")
|
||||
}
|
||||
if u.Name != "50-eth0.network" {
|
||||
t.Errorf("Unit has incorrect name %s", u.Name)
|
||||
}
|
||||
if u.Type() != "network" {
|
||||
t.Errorf("Unit has incorrect type '%s'", u.Type())
|
||||
}
|
||||
}
|
||||
|
||||
if cfg.Hostname != "trontastic" {
|
||||
t.Errorf("Failed to parse hostname")
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that our interface conversion doesn't panic
|
||||
func TestCloudConfigKeysNotList(t *testing.T) {
|
||||
contents := []byte(`
|
||||
ssh_authorized_keys:
|
||||
- foo: bar
|
||||
`)
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error :%v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigSerializationHeader(t *testing.T) {
|
||||
cfg, _ := NewCloudConfig([]byte{})
|
||||
contents := cfg.String()
|
||||
header := strings.SplitN(contents, "\n", 2)[0]
|
||||
if header != "#cloud-config" {
|
||||
t.Fatalf("Serialized config did not have expected header")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsers(t *testing.T) {
|
||||
contents := []byte(`
|
||||
users:
|
||||
- name: elroy
|
||||
passwd: somehash
|
||||
ssh-authorized-keys:
|
||||
- somekey
|
||||
gecos: arbitrary comment
|
||||
homedir: /home/place
|
||||
no-create-home: yes
|
||||
primary-group: things
|
||||
groups:
|
||||
- ping
|
||||
- pong
|
||||
no-user-group: true
|
||||
system: y
|
||||
no-log-init: True
|
||||
`)
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", cfg.Users)
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.PasswordHash != "somehash" {
|
||||
t.Errorf("User passwd is %q, expected 'somehash'", user.PasswordHash)
|
||||
}
|
||||
|
||||
if keys := user.SSHAuthorizedKeys; len(keys) != 1 {
|
||||
t.Errorf("Parsed %d ssh keys, expected 1", len(keys))
|
||||
} else {
|
||||
key := user.SSHAuthorizedKeys[0]
|
||||
if key != "somekey" {
|
||||
t.Errorf("User SSH key is %q, expected 'somekey'", key)
|
||||
}
|
||||
}
|
||||
|
||||
if user.GECOS != "arbitrary comment" {
|
||||
t.Errorf("Failed to parse gecos field, got %q", user.GECOS)
|
||||
}
|
||||
|
||||
if user.Homedir != "/home/place" {
|
||||
t.Errorf("Failed to parse homedir field, got %q", user.Homedir)
|
||||
}
|
||||
|
||||
if !user.NoCreateHome {
|
||||
t.Errorf("Failed to parse no-create-home field")
|
||||
}
|
||||
|
||||
if user.PrimaryGroup != "things"{
|
||||
t.Errorf("Failed to parse primary-group field, got %q", user.PrimaryGroup)
|
||||
}
|
||||
|
||||
if len(user.Groups) != 2 {
|
||||
t.Errorf("Failed to parse 2 goups, got %d", len(user.Groups))
|
||||
} else {
|
||||
if user.Groups[0] != "ping" {
|
||||
t.Errorf("First group was %q, not expected value 'ping'", user.Groups[0])
|
||||
}
|
||||
if user.Groups[1] != "pong" {
|
||||
t.Errorf("First group was %q, not expected value 'pong'", user.Groups[1])
|
||||
}
|
||||
}
|
||||
|
||||
if !user.NoUserGroup {
|
||||
t.Errorf("Failed to parse no-user-group field")
|
||||
}
|
||||
|
||||
if !user.System {
|
||||
t.Errorf("Failed to parse system field")
|
||||
}
|
||||
|
||||
if !user.NoLogInit {
|
||||
t.Errorf("Failed to parse no-log-init field")
|
||||
}
|
||||
}
|
25	cloudinit/etcd.go (Normal file)
@@ -0,0 +1,25 @@
package cloudinit

import (
	"io/ioutil"
	"log"
	"os"
	"path"
)

const (
	etcdDiscoveryPath = "/var/run/etcd/bootstrap.disco"
)

func PersistEtcdDiscoveryURL(url string) error {
	dir := path.Dir(etcdDiscoveryPath)
	if _, err := os.Stat(dir); err != nil {
		log.Printf("Creating directory /var/run/etcd")
		err := os.MkdirAll(dir, os.FileMode(0644))
		if err != nil {
			return err
		}
	}

	return ioutil.WriteFile(etcdDiscoveryPath, []byte(url), os.FileMode(0644))
}
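For context, PersistEtcdDiscoveryURL is the helper ApplyCloudConfig calls when coreos.etcd.discovery_url is set. Below is a minimal sketch of a direct call, assuming the same import path as in the earlier sketch; the discovery URL is the value used in the test fixture elsewhere in this change.

```go
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	// Writes the URL to /var/run/etcd/bootstrap.disco, as ApplyCloudConfig does.
	url := "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
	if err := cloudinit.PersistEtcdDiscoveryURL(url); err != nil {
		log.Fatalf("Failed to persist etcd discovery url: %v", err)
	}
}
```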
36	cloudinit/metadata_service.go (Normal file)
@@ -0,0 +1,36 @@
package cloudinit

import (
	"io/ioutil"
	"net/http"
)

type metadataService struct {
	url    string
	client http.Client
}

func NewMetadataService(url string) *metadataService {
	return &metadataService{url, http.Client{}}
}

func (ms *metadataService) UserData() ([]byte, error) {
	resp, err := ms.client.Get(ms.url)
	if err != nil {
		return []byte{}, err
	}
	defer resp.Body.Close()

	if resp.StatusCode / 100 != 2 {
		return []byte{}, nil
	}

	respBytes, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	return respBytes, nil
}
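A short sketch of how the metadata service added above might be used to fetch user-data; the EC2-style metadata URL is only an example and is not defined by this change.

```go
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	// The URL is illustrative; any endpoint serving raw user-data over HTTP
	// behaves the same way here.
	svc := cloudinit.NewMetadataService("http://169.254.169.254/latest/user-data")
	data, err := svc.UserData()
	if err != nil {
		log.Fatalf("Failed fetching user-data: %v", err)
	}
	log.Printf("Fetched %d bytes of user-data", len(data))
}
```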
@@ -1,18 +1,4 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package system
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
157	cloudinit/systemd.go (Normal file)
@@ -0,0 +1,157 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/third_party/github.com/coreos/go-systemd/dbus"
|
||||
)
|
||||
|
||||
type Unit struct {
|
||||
Name string
|
||||
Runtime bool
|
||||
Content string
|
||||
}
|
||||
|
||||
func (u *Unit) Type() string {
|
||||
ext := filepath.Ext(u.Name)
|
||||
return strings.TrimLeft(ext, ".")
|
||||
}
|
||||
|
||||
func (u *Unit) Group() (group string) {
|
||||
t := u.Type()
|
||||
if t == "network" || t == "netdev" || t == "link" {
|
||||
group = "network"
|
||||
} else {
|
||||
group = "system"
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
type Script []byte
|
||||
|
||||
func PlaceUnit(root string, u *Unit) (string, error) {
|
||||
dir := "etc"
|
||||
if u.Runtime {
|
||||
dir = "run"
|
||||
}
|
||||
|
||||
dst := path.Join(root, dir, "systemd", u.Group())
|
||||
if _, err := os.Stat(dst); os.IsNotExist(err) {
|
||||
if err := os.MkdirAll(dst, os.FileMode(0755)); err != nil {
|
||||
return "", err
|
||||
}
|
||||
}
|
||||
|
||||
dst = path.Join(dst, u.Name)
|
||||
err := ioutil.WriteFile(dst, []byte(u.Content), os.FileMode(0644))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return dst, nil
|
||||
}
|
||||
|
||||
func EnableUnitFile(file string, runtime bool) error {
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
files := []string{file}
|
||||
_, _, err = conn.EnableUnitFiles(files, runtime, true)
|
||||
return err
|
||||
}
|
||||
|
||||
func separateNetworkUnits(units []Unit) ([]Unit, []Unit) {
|
||||
networkUnits := make([]Unit, 0)
|
||||
nonNetworkUnits := make([]Unit, 0)
|
||||
for _, unit := range units {
|
||||
if unit.Group() == "network" {
|
||||
networkUnits = append(networkUnits, unit)
|
||||
} else {
|
||||
nonNetworkUnits = append(nonNetworkUnits, unit)
|
||||
}
|
||||
}
|
||||
return networkUnits, nonNetworkUnits
|
||||
}
|
||||
|
||||
func StartUnits(units []Unit) error {
|
||||
networkUnits, nonNetworkUnits := separateNetworkUnits(units)
|
||||
if len(networkUnits) > 0 {
|
||||
if err := RestartUnitByName("systemd-networkd.service"); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
for _, unit := range nonNetworkUnits {
|
||||
if err := RestartUnitByName(unit.Name); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func DaemonReload() error {
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = conn.Reload()
|
||||
return err
|
||||
}
|
||||
|
||||
func RestartUnitByName(name string) error {
|
||||
log.Printf("Restarting unit %s", name)
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
output, err := conn.RestartUnit(name, "replace")
|
||||
log.Printf("Restart completed with '%s'", output)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func StartUnitByName(name string) error {
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = conn.StartUnit(name, "replace")
|
||||
return err
|
||||
}
|
||||
|
||||
func ExecuteScript(scriptPath string) (string, error) {
|
||||
props := []dbus.Property{
|
||||
dbus.PropDescription("Unit generated and executed by coreos-cloudinit on behalf of user"),
|
||||
dbus.PropExecStart([]string{"/bin/bash", scriptPath}, false),
|
||||
}
|
||||
|
||||
base := path.Base(scriptPath)
|
||||
name := fmt.Sprintf("coreos-cloudinit-%s.service", base)
|
||||
|
||||
log.Printf("Creating transient systemd unit '%s'", name)
|
||||
|
||||
conn, err := dbus.New()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
_, err = conn.StartTransientUnit(name, "replace", props...)
|
||||
return name, err
|
||||
}
|
||||
|
||||
func SetHostname(hostname string) error {
|
||||
return exec.Command("hostnamectl", "set-hostname", hostname).Run()
|
||||
}
|
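cloudinit/systemd.go (added above) provides Unit, PlaceUnit, EnableUnitFile, DaemonReload, and StartUnits. The sketch below strings them together in the same order ApplyCloudConfig does; the unit name and contents are made up for illustration, and the calls assume root privileges and a running systemd.

```go
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	u := cloudinit.Unit{
		Name:    "hello.service", // illustrative unit
		Runtime: true,            // place under /run rather than /etc
		Content: "[Service]\nExecStart=/usr/bin/echo hello\n",
	}

	// Same sequence ApplyCloudConfig uses: write the unit file, enable it,
	// reload systemd, then (re)start it over D-Bus.
	dst, err := cloudinit.PlaceUnit("/", &u)
	if err != nil {
		log.Fatal(err)
	}
	if err := cloudinit.EnableUnitFile(dst, u.Runtime); err != nil {
		log.Fatal(err)
	}
	cloudinit.DaemonReload()
	if err := cloudinit.StartUnits([]cloudinit.Unit{u}); err != nil {
		log.Fatal(err)
	}
}
```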
102	cloudinit/systemd_test.go (Normal file)
@@ -0,0 +1,102 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"syscall"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestPlaceNetworkUnit(t *testing.T) {
|
||||
u := Unit{
|
||||
Name: "50-eth0.network",
|
||||
Runtime: true,
|
||||
Content: `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`,
|
||||
}
|
||||
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if _, err := PlaceUnit(dir, &u); err != nil {
|
||||
t.Fatalf("PlaceUnit failed: %v", err)
|
||||
}
|
||||
|
||||
fullPath := path.Join(dir, "run", "systemd", "network", "50-eth0.network")
|
||||
fi, err := os.Stat(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to stat file: %v", err)
|
||||
}
|
||||
|
||||
if fi.Mode() != os.FileMode(0644) {
|
||||
t.Errorf("File has incorrect mode: %v", fi.Mode())
|
||||
}
|
||||
|
||||
contents, err := ioutil.ReadFile(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to read expected file: %v", err)
|
||||
}
|
||||
|
||||
expect := `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`
|
||||
if string(contents) != expect {
|
||||
t.Fatalf("File has incorrect contents '%s'.\nExpected '%s'", string(contents), expect)
|
||||
}
|
||||
}
|
||||
|
||||
func TestPlaceMountUnit(t *testing.T) {
|
||||
u := Unit{
|
||||
Name: "media-state.mount",
|
||||
Runtime: false,
|
||||
Content: `[Mount]
|
||||
What=/dev/sdb1
|
||||
Where=/media/state
|
||||
`,
|
||||
}
|
||||
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if _, err := PlaceUnit(dir, &u); err != nil {
|
||||
t.Fatalf("PlaceUnit failed: %v", err)
|
||||
}
|
||||
|
||||
fullPath := path.Join(dir, "etc", "systemd", "system", "media-state.mount")
|
||||
fi, err := os.Stat(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to stat file: %v", err)
|
||||
}
|
||||
|
||||
if fi.Mode() != os.FileMode(0644) {
|
||||
t.Errorf("File has incorrect mode: %v", fi.Mode())
|
||||
}
|
||||
|
||||
contents, err := ioutil.ReadFile(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to read expected file: %v", err)
|
||||
}
|
||||
|
||||
expect := `[Mount]
|
||||
What=/dev/sdb1
|
||||
Where=/media/state
|
||||
`
|
||||
if string(contents) != expect {
|
||||
t.Fatalf("File has incorrect contents '%s'.\nExpected '%s'", string(contents), expect)
|
||||
}
|
||||
}
|
||||
|
@@ -1,18 +1,4 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package system
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
@@ -20,22 +6,32 @@ import (
|
||||
"os/exec"
|
||||
"os/user"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
)
|
||||
|
||||
func UserExists(u *config.User) bool {
|
||||
type User struct {
|
||||
Name string `yaml:"name"`
|
||||
PasswordHash string `yaml:"passwd"`
|
||||
SSHAuthorizedKeys []string `yaml:"ssh-authorized-keys"`
|
||||
GECOS string `yaml:"gecos"`
|
||||
Homedir string `yaml:"homedir"`
|
||||
NoCreateHome bool `yaml:"no-create-home"`
|
||||
PrimaryGroup string `yaml:"primary-group"`
|
||||
Groups []string `yaml:"groups"`
|
||||
NoUserGroup bool `yaml:"no-user-group"`
|
||||
System bool `yaml:"system"`
|
||||
NoLogInit bool `yaml:"no-log-init"`
|
||||
}
|
||||
|
||||
func UserExists(u *User) bool {
|
||||
_, err := user.Lookup(u.Name)
|
||||
return err == nil
|
||||
}
|
||||
|
||||
func CreateUser(u *config.User) error {
|
||||
func CreateUser(u *User) error {
|
||||
args := []string{}
|
||||
|
||||
if u.PasswordHash != "" {
|
||||
args = append(args, "--password", u.PasswordHash)
|
||||
} else {
|
||||
args = append(args, "--password", "*")
|
||||
}
|
||||
|
||||
if u.GECOS != "" {
|
||||
@@ -53,7 +49,7 @@ func CreateUser(u *config.User) error {
|
||||
}
|
||||
|
||||
if u.PrimaryGroup != "" {
|
||||
args = append(args, "--gid", u.PrimaryGroup)
|
||||
args = append(args, "--primary-group", u.PrimaryGroup)
|
||||
}
|
||||
|
||||
if len(u.Groups) > 0 {
|
||||
@@ -72,10 +68,6 @@ func CreateUser(u *config.User) error {
|
||||
args = append(args, "--no-log-init")
|
||||
}
|
||||
|
||||
if u.Shell != "" {
|
||||
args = append(args, "--shell", u.Shell)
|
||||
}
|
||||
|
||||
args = append(args, u.Name)
|
||||
|
||||
output, err := exec.Command("useradd", args...).CombinedOutput()
|
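The diff above moves the User type and CreateUser into package cloudinit. Here is a hypothetical caller, under the same import-path assumption as the earlier sketches; CreateUser shells out to useradd, so it is only meaningful with sufficient privileges.

```go
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	// Field values are borrowed from the test fixture in this change.
	u := cloudinit.User{
		Name:         "elroy",
		PasswordHash: "somehash",
		Groups:       []string{"ping", "pong"},
	}

	// CreateUser shells out to useradd, so this only works when run as a
	// privileged user on a system that has that command.
	if !cloudinit.UserExists(&u) {
		if err := cloudinit.CreateUser(&u); err != nil {
			log.Fatalf("Failed creating user '%s': %v", u.Name, err)
		}
	}
}
```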
30	cloudinit/user_data.go (Normal file)
@@ -0,0 +1,30 @@
package cloudinit

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"strings"
)

func ParseUserData(contents []byte) (interface{}, error) {
	bytereader := bytes.NewReader(contents)
	bufreader := bufio.NewReader(bytereader)
	header, _ := bufreader.ReadString('\n')

	if strings.HasPrefix(header, "#!") {
		log.Printf("Parsing user-data as script")
		return Script(contents), nil

	} else if header == "#cloud-config\n" {
		log.Printf("Parsing user-data as cloud-config")
		cfg, err := NewCloudConfig(contents)
		if err != nil {
			log.Fatal(err.Error())
		}
		return *cfg, nil
	} else {
		return nil, fmt.Errorf("Unrecognized user-data header: %s", header)
	}
}
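ParseUserData returns either a Script or a CloudConfig depending on the first line of the user-data, so callers dispatch on the concrete type. A small sketch of that dispatch, assuming the same import path as above; the hostname value comes from the test fixture in this change.

```go
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	userdata := []byte("#cloud-config\nhostname: trontastic\n")

	parsed, err := cloudinit.ParseUserData(userdata)
	if err != nil {
		log.Fatal(err)
	}

	// ParseUserData hands back either a Script or a CloudConfig.
	switch t := parsed.(type) {
	case cloudinit.Script:
		log.Printf("user-data is a %d-byte script", len(t))
	case cloudinit.CloudConfig:
		log.Printf("user-data is a cloud-config with hostname %q", t.Hostname)
	}
}
```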
66	cloudinit/workspace.go (Normal file)
@@ -0,0 +1,66 @@
package cloudinit

import (
	"fmt"
	"io/ioutil"
	"os"
	"path"
)

func PrepWorkspace(workspace string) error {
	// Ensure workspace exists and is a directory
	info, err := os.Stat(workspace)
	if err == nil {
		if !info.IsDir() {
			return fmt.Errorf("%s is not a directory", workspace)
		}
	} else {
		err = os.MkdirAll(workspace, 0755)
		if err != nil {
			return err
		}
	}

	// Ensure scripts dir in workspace exists and is a directory
	scripts := path.Join(workspace, "scripts")
	info, err = os.Stat(scripts)
	if err == nil {
		if !info.IsDir() {
			return fmt.Errorf("%s is not a directory", scripts)
		}
	} else {
		err = os.Mkdir(scripts, 0755)
		if err != nil {
			return err
		}
	}

	return nil
}

func PersistScriptInWorkspace(script Script, workspace string) (string, error) {
	scriptsDir := path.Join(workspace, "scripts")
	f, err := ioutil.TempFile(scriptsDir, "")
	if err != nil {
		return "", err
	}
	defer f.Close()

	f.Chmod(0744)

	_, err = f.Write(script)
	if err != nil {
		return "", err
	}

	// Ensure script has been written to disk before returning, as the
	// next natural thing to do is execute it
	f.Sync()

	return f.Name(), nil
}

func PersistScriptUnitNameInWorkspace(name string, workspace string) error {
	unitPath := path.Join(workspace, "scripts", "unit-name")
	return ioutil.WriteFile(unitPath, []byte(name), 0644)
}
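workspace.go pairs naturally with ExecuteScript from systemd.go: prepare the workspace, persist the script, run it as a transient unit, and record the unit name. A sketch under the same import-path assumption; the workspace path is illustrative.

```go
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	workspace := "/var/lib/coreos-cloudinit" // illustrative workspace path
	script := cloudinit.Script("#!/bin/bash\necho 'Hello, world!'\n")

	if err := cloudinit.PrepWorkspace(workspace); err != nil {
		log.Fatal(err)
	}

	// Persist the script, run it as a transient systemd unit, and remember
	// the generated unit name alongside it.
	path, err := cloudinit.PersistScriptInWorkspace(script, workspace)
	if err != nil {
		log.Fatal(err)
	}
	name, err := cloudinit.ExecuteScript(path)
	if err != nil {
		log.Fatal(err)
	}
	if err := cloudinit.PersistScriptUnitNameInWorkspace(name, workspace); err != nil {
		log.Fatal(err)
	}
}
```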
46	cloudinit/write_file.go (Normal file)
@@ -0,0 +1,46 @@
package cloudinit

import (
	"errors"
	"io/ioutil"
	"os"
	"os/exec"
	"path"
	"strconv"
)

type WriteFile struct {
	Encoding    string
	Content     string
	Owner       string
	Path        string
	Permissions string
}

func ProcessWriteFile(base string, wf *WriteFile) error {
	fullPath := path.Join(base, wf.Path)

	if err := os.MkdirAll(path.Dir(fullPath), os.FileMode(0744)); err != nil {
		return err
	}

	// Parse string representation of file mode as octal
	perm, err := strconv.ParseInt(wf.Permissions, 8, 32)
	if err != nil {
		return errors.New("Unable to parse file permissions as octal integer")
	}

	if err := ioutil.WriteFile(fullPath, []byte(wf.Content), os.FileMode(perm)); err != nil {
		return err
	}

	if wf.Owner != "" {
		// We shell out since we don't have a way to look up unix groups natively
		cmd := exec.Command("chown", wf.Owner, fullPath)
		if err := cmd.Run(); err != nil {
			return err
		}
	}

	return nil
}
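ProcessWriteFile joins the requested path onto a base directory, creates parent directories, parses the permission string as octal, and optionally chowns the result. A minimal sketch, assuming the import path used above; the file contents mirror the README example.

```go
package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/cloudinit" // assumed import path
)

func main() {
	wf := cloudinit.WriteFile{
		Path:        "/etc/resolv.conf", // joined onto the base directory below
		Content:     "nameserver 192.0.2.2\nnameserver 192.0.2.3\n",
		Permissions: "0644", // parsed as an octal string
		Owner:       "root", // handed to chown; leave empty to skip
	}

	// A base of "/" writes into the live filesystem; the tests in this change
	// pass a temporary directory instead.
	if err := cloudinit.ProcessWriteFile("/", &wf); err != nil {
		log.Fatalf("Failed writing %s: %v", wf.Path, err)
	}
}
```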
81	cloudinit/write_file_test.go (Normal file)
@@ -0,0 +1,81 @@
|
||||
package cloudinit
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"syscall"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestWriteFileUnencodedContent(t *testing.T) {
|
||||
wf := WriteFile{
|
||||
Path: "/tmp/foo",
|
||||
Content: "bar",
|
||||
Permissions: "0644",
|
||||
}
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if err := ProcessWriteFile(dir, &wf); err != nil {
|
||||
t.Fatalf("Processing of WriteFile failed: %v", err)
|
||||
}
|
||||
|
||||
fullPath := path.Join(dir, "tmp", "foo")
|
||||
|
||||
fi, err := os.Stat(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to stat file: %v", err)
|
||||
}
|
||||
|
||||
if fi.Mode() != os.FileMode(0644) {
|
||||
t.Errorf("File has incorrect mode: %v", fi.Mode())
|
||||
}
|
||||
|
||||
contents, err := ioutil.ReadFile(fullPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to read expected file: %v", err)
|
||||
}
|
||||
|
||||
if string(contents) != "bar" {
|
||||
t.Fatalf("File has incorrect contents")
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteFileInvalidPermission(t *testing.T) {
|
||||
wf := WriteFile{
|
||||
Path: "/tmp/foo",
|
||||
Content: "bar",
|
||||
Permissions: "pants",
|
||||
}
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if err := ProcessWriteFile(dir, &wf); err == nil {
|
||||
t.Fatalf("Expected error to be raised when writing file with invalid permission")
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteFileEncodedContent(t *testing.T) {
|
||||
wf := WriteFile{
|
||||
Path: "/tmp/foo",
|
||||
Content: "",
|
||||
Encoding: "base64",
|
||||
}
|
||||
|
||||
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
|
||||
if err != nil {
|
||||
t.Fatalf("Unable to create tempdir: %v", err)
|
||||
}
|
||||
defer syscall.Rmdir(dir)
|
||||
|
||||
if err := ProcessWriteFile(dir, &wf); err == nil {
|
||||
t.Fatalf("Expected error to be raised when writing file with encoding")
|
||||
}
|
||||
}
|
154	config/config.go
@@ -1,154 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/coreos/yaml"
|
||||
)
|
||||
|
||||
// CloudConfig encapsulates the entire cloud-config configuration file and maps
|
||||
// directly to YAML. Fields that cannot be set in the cloud-config (fields
|
||||
// used for internal use) have the YAML tag '-' so that they aren't marshalled.
|
||||
type CloudConfig struct {
|
||||
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
|
||||
CoreOS CoreOS `yaml:"coreos"`
|
||||
WriteFiles []File `yaml:"write_files"`
|
||||
Hostname string `yaml:"hostname"`
|
||||
Users []User `yaml:"users"`
|
||||
ManageEtcHosts EtcHosts `yaml:"manage_etc_hosts"`
|
||||
}
|
||||
|
||||
type CoreOS struct {
|
||||
Etcd Etcd `yaml:"etcd"`
|
||||
Flannel Flannel `yaml:"flannel"`
|
||||
Fleet Fleet `yaml:"fleet"`
|
||||
Locksmith Locksmith `yaml:"locksmith"`
|
||||
OEM OEM `yaml:"oem"`
|
||||
Update Update `yaml:"update"`
|
||||
Units []Unit `yaml:"units"`
|
||||
}
|
||||
|
||||
func IsCloudConfig(userdata string) bool {
|
||||
header := strings.SplitN(userdata, "\n", 2)[0]
|
||||
|
||||
// Explicitly trim the header so we can handle user-data from
|
||||
// non-unix operating systems. The rest of the file is parsed
|
||||
// by yaml, which correctly handles CRLF.
|
||||
header = strings.TrimSuffix(header, "\r")
|
||||
|
||||
return (header == "#cloud-config")
|
||||
}
|
||||
|
||||
// NewCloudConfig instantiates a new CloudConfig from the given contents (a
|
||||
// string of YAML), returning any error encountered. It will ignore unknown
|
||||
// fields but log encountering them.
|
||||
func NewCloudConfig(contents string) (*CloudConfig, error) {
|
||||
yaml.UnmarshalMappingKeyTransform = func(nameIn string) (nameOut string) {
|
||||
return strings.Replace(nameIn, "-", "_", -1)
|
||||
}
|
||||
var cfg CloudConfig
|
||||
err := yaml.Unmarshal([]byte(contents), &cfg)
|
||||
return &cfg, err
|
||||
}
|
||||
|
||||
func (cc CloudConfig) String() string {
|
||||
bytes, err := yaml.Marshal(cc)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
stringified := string(bytes)
|
||||
stringified = fmt.Sprintf("#cloud-config\n%s", stringified)
|
||||
|
||||
return stringified
|
||||
}
|
||||
|
||||
// IsZero returns whether or not the parameter is the zero value for its type.
|
||||
// If the parameter is a struct, only the exported fields are considered.
|
||||
func IsZero(c interface{}) bool {
|
||||
return isZero(reflect.ValueOf(c))
|
||||
}
|
||||
|
||||
type ErrorValid struct {
|
||||
Value string
|
||||
Valid string
|
||||
Field string
|
||||
}
|
||||
|
||||
func (e ErrorValid) Error() string {
|
||||
return fmt.Sprintf("invalid value %q for option %q (valid options: %q)", e.Value, e.Field, e.Valid)
|
||||
}
|
||||
|
||||
// AssertStructValid checks the fields in the structure and makes sure that
|
||||
// they contain valid values as specified by the 'valid' flag. Empty fields are
|
||||
// implicitly valid.
|
||||
func AssertStructValid(c interface{}) error {
|
||||
ct := reflect.TypeOf(c)
|
||||
cv := reflect.ValueOf(c)
|
||||
for i := 0; i < ct.NumField(); i++ {
|
||||
ft := ct.Field(i)
|
||||
if !isFieldExported(ft) {
|
||||
continue
|
||||
}
|
||||
|
||||
if err := AssertValid(cv.Field(i), ft.Tag.Get("valid")); err != nil {
|
||||
err.Field = ft.Name
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// AssertValid checks to make sure that the given value is in the list of
|
||||
// valid values. Zero values are implicitly valid.
|
||||
func AssertValid(value reflect.Value, valid string) *ErrorValid {
|
||||
if valid == "" || isZero(value) {
|
||||
return nil
|
||||
}
|
||||
|
||||
vs := fmt.Sprintf("%v", value.Interface())
|
||||
if m, _ := regexp.MatchString(valid, vs); m {
|
||||
return nil
|
||||
}
|
||||
|
||||
return &ErrorValid{
|
||||
Value: vs,
|
||||
Valid: valid,
|
||||
}
|
||||
}
|
||||
|
||||
func isZero(v reflect.Value) bool {
|
||||
switch v.Kind() {
|
||||
case reflect.Struct:
|
||||
vt := v.Type()
|
||||
for i := 0; i < v.NumField(); i++ {
|
||||
if isFieldExported(vt.Field(i)) && !isZero(v.Field(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
default:
|
||||
return v.Interface() == reflect.Zero(v.Type()).Interface()
|
||||
}
|
||||
}
|
||||
|
||||
func isFieldExported(f reflect.StructField) bool {
|
||||
return f.PkgPath == ""
|
||||
}
|
@@ -1,502 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestNewCloudConfig(t *testing.T) {
|
||||
tests := []struct {
|
||||
contents string
|
||||
|
||||
config CloudConfig
|
||||
}{
|
||||
{},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - path: underscore",
|
||||
config: CloudConfig{WriteFiles: []File{File{Path: "underscore"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite-files:\n - path: hyphen",
|
||||
config: CloudConfig{WriteFiles: []File{File{Path: "hyphen"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\ncoreos:\n update:\n reboot-strategy: off",
|
||||
config: CloudConfig{CoreOS: CoreOS{Update: Update{RebootStrategy: "off"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\ncoreos:\n update:\n reboot-strategy: false",
|
||||
config: CloudConfig{CoreOS: CoreOS{Update: Update{RebootStrategy: "false"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: 0744",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "0744"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: 744",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "744"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: '0744'",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "0744"}}},
|
||||
},
|
||||
{
|
||||
contents: "#cloud-config\nwrite_files:\n - permissions: '744'",
|
||||
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "744"}}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
config, err := NewCloudConfig(tt.contents)
|
||||
if err != nil {
|
||||
t.Errorf("bad error (test case #%d): want %v, got %s", i, nil, err)
|
||||
}
|
||||
if !reflect.DeepEqual(&tt.config, config) {
|
||||
t.Errorf("bad config (test case #%d): want %#v, got %#v", i, tt.config, config)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsZero(t *testing.T) {
|
||||
tests := []struct {
|
||||
c interface{}
|
||||
|
||||
empty bool
|
||||
}{
|
||||
{struct{}{}, true},
|
||||
{struct{ a, b string }{}, true},
|
||||
{struct{ A, b string }{}, true},
|
||||
{struct{ A, B string }{}, true},
|
||||
{struct{ A string }{A: "hello"}, false},
|
||||
{struct{ A int }{}, true},
|
||||
{struct{ A int }{A: 1}, false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if empty := IsZero(tt.c); tt.empty != empty {
|
||||
t.Errorf("bad result (%q): want %t, got %t", tt.c, tt.empty, empty)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestAssertStructValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
c interface{}
|
||||
|
||||
err error
|
||||
}{
|
||||
{struct{}{}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{A: "1", b: "2"}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{A: "1", b: "hello"}, nil},
|
||||
{struct {
|
||||
A, b string `valid:"^1|2$"`
|
||||
}{A: "hello", b: "2"}, &ErrorValid{Value: "hello", Field: "A", Valid: "^1|2$"}},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{}, nil},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{A: 1, b: 2}, nil},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{A: 1, b: 9}, nil},
|
||||
{struct {
|
||||
A, b int `valid:"^1|2$"`
|
||||
}{A: 9, b: 2}, &ErrorValid{Value: "9", Field: "A", Valid: "^1|2$"}},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if err := AssertStructValid(tt.c); !reflect.DeepEqual(tt.err, err) {
|
||||
t.Errorf("bad result (%q): want %q, got %q", tt.c, tt.err, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigCompile(t *testing.T) {
|
||||
tests := []interface{}{
|
||||
Etcd{},
|
||||
File{},
|
||||
Flannel{},
|
||||
Fleet{},
|
||||
Locksmith{},
|
||||
OEM{},
|
||||
Unit{},
|
||||
Update{},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
ttt := reflect.TypeOf(tt)
|
||||
for i := 0; i < ttt.NumField(); i++ {
|
||||
ft := ttt.Field(i)
|
||||
if !isFieldExported(ft) {
|
||||
continue
|
||||
}
|
||||
|
||||
if _, err := regexp.Compile(ft.Tag.Get("valid")); err != nil {
|
||||
t.Errorf("bad regexp(%s.%s): want %v, got %s", ttt.Name(), ft.Name, nil, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUnknownKeys(t *testing.T) {
|
||||
contents := `
|
||||
coreos:
|
||||
etcd:
|
||||
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
|
||||
coreos_unknown:
|
||||
foo: "bar"
|
||||
section_unknown:
|
||||
dunno:
|
||||
something
|
||||
bare_unknown:
|
||||
bar
|
||||
write_files:
|
||||
- content: fun
|
||||
path: /var/party
|
||||
file_unknown: nofun
|
||||
users:
|
||||
- name: fry
|
||||
passwd: somehash
|
||||
user_unknown: philip
|
||||
hostname:
|
||||
foo
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("error instantiating CloudConfig with unknown keys: %v", err)
|
||||
}
|
||||
if cfg.Hostname != "foo" {
|
||||
t.Fatalf("hostname not correctly set when invalid keys are present")
|
||||
}
|
||||
if cfg.CoreOS.Etcd.Discovery != "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877" {
|
||||
t.Fatalf("etcd section not correctly set when invalid keys are present")
|
||||
}
|
||||
if len(cfg.WriteFiles) < 1 || cfg.WriteFiles[0].Content != "fun" || cfg.WriteFiles[0].Path != "/var/party" {
|
||||
t.Fatalf("write_files section not correctly set when invalid keys are present")
|
||||
}
|
||||
if len(cfg.Users) < 1 || cfg.Users[0].Name != "fry" || cfg.Users[0].PasswordHash != "somehash" {
|
||||
t.Fatalf("users section not correctly set when invalid keys are present")
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfigEmpty(t *testing.T) {
|
||||
cfg, err := NewCloudConfig("")
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 0 {
|
||||
t.Error("Expected zero WriteFiles")
|
||||
}
|
||||
|
||||
if cfg.Hostname != "" {
|
||||
t.Errorf("Expected hostname to be empty, got '%s'", cfg.Hostname)
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that the parsing of a cloud config file "generally works"
|
||||
func TestCloudConfig(t *testing.T) {
|
||||
contents := `
|
||||
coreos:
|
||||
etcd:
|
||||
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
|
||||
update:
|
||||
reboot_strategy: reboot
|
||||
units:
|
||||
- name: 50-eth0.network
|
||||
runtime: yes
|
||||
content: '[Match]
|
||||
|
||||
Name=eth47
|
||||
|
||||
|
||||
[Network]
|
||||
|
||||
Address=10.209.171.177/19
|
||||
|
||||
'
|
||||
oem:
|
||||
id: rackspace
|
||||
name: Rackspace Cloud Servers
|
||||
version_id: 168.0.0
|
||||
home_url: https://www.rackspace.com/cloud/servers/
|
||||
bug_report_url: https://github.com/coreos/coreos-overlay
|
||||
ssh_authorized_keys:
|
||||
- foobar
|
||||
- foobaz
|
||||
write_files:
|
||||
- content: |
|
||||
penny
|
||||
elroy
|
||||
path: /etc/dogepack.conf
|
||||
permissions: '0644'
|
||||
owner: root:dogepack
|
||||
hostname: trontastic
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 2 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
} else if keys[0] != "foobar" {
|
||||
t.Error("Expected first SSH key to be 'foobar'")
|
||||
} else if keys[1] != "foobaz" {
|
||||
t.Error("Expected second SSH key to be 'foobaz'")
|
||||
}
|
||||
|
||||
if len(cfg.WriteFiles) != 1 {
|
||||
t.Error("Failed to parse correct number of write_files")
|
||||
} else {
|
||||
wf := cfg.WriteFiles[0]
|
||||
if wf.Content != "penny\nelroy\n" {
|
||||
t.Errorf("WriteFile has incorrect contents '%s'", wf.Content)
|
||||
}
|
||||
if wf.Encoding != "" {
|
||||
t.Errorf("WriteFile has incorrect encoding %s", wf.Encoding)
|
||||
}
|
||||
if wf.RawFilePermissions != "0644" {
|
||||
t.Errorf("WriteFile has incorrect permissions %s", wf.RawFilePermissions)
|
||||
}
|
||||
if wf.Path != "/etc/dogepack.conf" {
|
||||
t.Errorf("WriteFile has incorrect path %s", wf.Path)
|
||||
}
|
||||
if wf.Owner != "root:dogepack" {
|
||||
t.Errorf("WriteFile has incorrect owner %s", wf.Owner)
|
||||
}
|
||||
}
|
||||
|
||||
if len(cfg.CoreOS.Units) != 1 {
|
||||
t.Error("Failed to parse correct number of units")
|
||||
} else {
|
||||
u := cfg.CoreOS.Units[0]
|
||||
expect := `[Match]
|
||||
Name=eth47
|
||||
|
||||
[Network]
|
||||
Address=10.209.171.177/19
|
||||
`
|
||||
if u.Content != expect {
|
||||
t.Errorf("Unit has incorrect contents '%s'.\nExpected '%s'.", u.Content, expect)
|
||||
}
|
||||
if u.Runtime != true {
|
||||
t.Errorf("Unit has incorrect runtime value")
|
||||
}
|
||||
if u.Name != "50-eth0.network" {
|
||||
t.Errorf("Unit has incorrect name %s", u.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if cfg.CoreOS.OEM.ID != "rackspace" {
|
||||
t.Errorf("Failed parsing coreos.oem. Expected ID 'rackspace', got %q.", cfg.CoreOS.OEM.ID)
|
||||
}
|
||||
|
||||
if cfg.Hostname != "trontastic" {
|
||||
t.Errorf("Failed to parse hostname")
|
||||
}
|
||||
if cfg.CoreOS.Update.RebootStrategy != "reboot" {
|
||||
t.Errorf("Failed to parse locksmith strategy")
|
||||
}
|
||||
}
|
||||
|
||||
// Assert that our interface conversion doesn't panic
|
||||
func TestCloudConfigKeysNotList(t *testing.T) {
|
||||
contents := `
|
||||
ssh_authorized_keys:
|
||||
- foo: bar
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
keys := cfg.SSHAuthorizedKeys
|
||||
if len(keys) != 0 {
|
||||
t.Error("Parsed incorrect number of SSH keys")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigSerializationHeader(t *testing.T) {
|
||||
cfg, _ := NewCloudConfig("")
|
||||
contents := cfg.String()
|
||||
header := strings.SplitN(contents, "\n", 2)[0]
|
||||
if header != "#cloud-config" {
|
||||
t.Fatalf("Serialized config did not have expected header")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsers(t *testing.T) {
|
||||
contents := `
|
||||
users:
|
||||
- name: elroy
|
||||
passwd: somehash
|
||||
ssh_authorized_keys:
|
||||
- somekey
|
||||
gecos: arbitrary comment
|
||||
homedir: /home/place
|
||||
no_create_home: yes
|
||||
primary_group: things
|
||||
groups:
|
||||
- ping
|
||||
- pong
|
||||
no_user_group: true
|
||||
system: y
|
||||
no_log_init: True
|
||||
shell: /bin/sh
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.PasswordHash != "somehash" {
|
||||
t.Errorf("User passwd is %q, expected 'somehash'", user.PasswordHash)
|
||||
}
|
||||
|
||||
if keys := user.SSHAuthorizedKeys; len(keys) != 1 {
|
||||
t.Errorf("Parsed %d ssh keys, expected 1", len(keys))
|
||||
} else {
|
||||
key := user.SSHAuthorizedKeys[0]
|
||||
if key != "somekey" {
|
||||
t.Errorf("User SSH key is %q, expected 'somekey'", key)
|
||||
}
|
||||
}
|
||||
|
||||
if user.GECOS != "arbitrary comment" {
|
||||
t.Errorf("Failed to parse gecos field, got %q", user.GECOS)
|
||||
}
|
||||
|
||||
if user.Homedir != "/home/place" {
|
||||
t.Errorf("Failed to parse homedir field, got %q", user.Homedir)
|
||||
}
|
||||
|
||||
if !user.NoCreateHome {
|
||||
t.Errorf("Failed to parse no_create_home field")
|
||||
}
|
||||
|
||||
if user.PrimaryGroup != "things" {
|
||||
t.Errorf("Failed to parse primary_group field, got %q", user.PrimaryGroup)
|
||||
}
|
||||
|
||||
if len(user.Groups) != 2 {
|
||||
t.Errorf("Failed to parse 2 groups, got %d", len(user.Groups))
|
||||
} else {
|
||||
if user.Groups[0] != "ping" {
|
||||
t.Errorf("First group was %q, not expected value 'ping'", user.Groups[0])
|
||||
}
|
||||
if user.Groups[1] != "pong" {
|
||||
t.Errorf("Second group was %q, not expected value 'pong'", user.Groups[1])
|
||||
}
|
||||
}
|
||||
|
||||
if !user.NoUserGroup {
|
||||
t.Errorf("Failed to parse no_user_group field")
|
||||
}
|
||||
|
||||
if !user.System {
|
||||
t.Errorf("Failed to parse system field")
|
||||
}
|
||||
|
||||
if !user.NoLogInit {
|
||||
t.Errorf("Failed to parse no_log_init field")
|
||||
}
|
||||
|
||||
if user.Shell != "/bin/sh" {
|
||||
t.Errorf("Failed to parse shell field, got %q", user.Shell)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsersGithubUser(t *testing.T) {
|
||||
|
||||
contents := `
|
||||
users:
|
||||
- name: elroy
|
||||
coreos_ssh_import_github: bcwaldon
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.SSHImportGithubUser != "bcwaldon" {
|
||||
t.Errorf("github user is %q, expected 'bcwaldon'", user.SSHImportGithubUser)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCloudConfigUsersSSHImportURL(t *testing.T) {
|
||||
contents := `
|
||||
users:
|
||||
- name: elroy
|
||||
coreos_ssh_import_url: https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys
|
||||
`
|
||||
cfg, err := NewCloudConfig(contents)
|
||||
if err != nil {
|
||||
t.Fatalf("Encountered unexpected error: %v", err)
|
||||
}
|
||||
|
||||
if len(cfg.Users) != 1 {
|
||||
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
|
||||
}
|
||||
|
||||
user := cfg.Users[0]
|
||||
|
||||
if user.Name != "elroy" {
|
||||
t.Errorf("User name is %q, expected 'elroy'", user.Name)
|
||||
}
|
||||
|
||||
if user.SSHImportURL != "https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys" {
|
||||
t.Errorf("ssh import url is %q, expected 'https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys'", user.SSHImportURL)
|
||||
}
|
||||
}
|
@@ -1,56 +0,0 @@
package config

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
)

func DecodeBase64Content(content string) ([]byte, error) {
	output, err := base64.StdEncoding.DecodeString(content)

	if err != nil {
		return nil, fmt.Errorf("Unable to decode base64: %q", err)
	}

	return output, nil
}

func DecodeGzipContent(content string) ([]byte, error) {
	gzr, err := gzip.NewReader(bytes.NewReader([]byte(content)))

	if err != nil {
		return nil, fmt.Errorf("Unable to decode gzip: %q", err)
	}
	defer gzr.Close()

	buf := new(bytes.Buffer)
	buf.ReadFrom(gzr)

	return buf.Bytes(), nil
}

func DecodeContent(content string, encoding string) ([]byte, error) {
	switch encoding {
	case "":
		return []byte(content), nil

	case "b64", "base64":
		return DecodeBase64Content(content)

	case "gz", "gzip":
		return DecodeGzipContent(content)

	case "gz+base64", "gzip+base64", "gz+b64", "gzip+b64":
		gz, err := DecodeBase64Content(content)

		if err != nil {
			return nil, err
		}

		return DecodeGzipContent(string(gz))
	}

	return nil, fmt.Errorf("Unsupported encoding %q", encoding)
}
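For context, a small sketch of how these decoders compose for the `gzip+base64` case (one of the encodings accepted by `File.Encoding` later in this change). The program is illustrative only: it builds its own payload and round-trips it through `DecodeContent`.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	// Build a gzip+base64 payload, the same shape a write_files entry
	// with encoding "gzip+base64" would carry.
	var gz bytes.Buffer
	zw := gzip.NewWriter(&gz)
	zw.Write([]byte("#!/bin/sh\necho hello\n"))
	zw.Close()
	payload := base64.StdEncoding.EncodeToString(gz.Bytes())

	// DecodeContent first base64-decodes, then gunzips.
	out, err := config.DecodeContent(payload, "gzip+base64")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // prints the original shell script
}
```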
@@ -1,17 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

type EtcHosts string
@@ -1,67 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Etcd struct {
|
||||
Addr string `yaml:"addr" env:"ETCD_ADDR"`
|
||||
AdvertiseClientURLs string `yaml:"advertise_client_urls" env:"ETCD_ADVERTISE_CLIENT_URLS"`
|
||||
BindAddr string `yaml:"bind_addr" env:"ETCD_BIND_ADDR"`
|
||||
CAFile string `yaml:"ca_file" env:"ETCD_CA_FILE"`
|
||||
CertFile string `yaml:"cert_file" env:"ETCD_CERT_FILE"`
|
||||
ClusterActiveSize int `yaml:"cluster_active_size" env:"ETCD_CLUSTER_ACTIVE_SIZE"`
|
||||
ClusterRemoveDelay float64 `yaml:"cluster_remove_delay" env:"ETCD_CLUSTER_REMOVE_DELAY"`
|
||||
ClusterSyncInterval float64 `yaml:"cluster_sync_interval" env:"ETCD_CLUSTER_SYNC_INTERVAL"`
|
||||
CorsOrigins string `yaml:"cors" env:"ETCD_CORS"`
|
||||
DataDir string `yaml:"data_dir" env:"ETCD_DATA_DIR"`
|
||||
Discovery string `yaml:"discovery" env:"ETCD_DISCOVERY"`
|
||||
DiscoveryFallback string `yaml:"discovery_fallback" env:"ETCD_DISCOVERY_FALLBACK"`
|
||||
DiscoverySRV string `yaml:"discovery_srv" env:"ETCD_DISCOVERY_SRV"`
|
||||
DiscoveryProxy string `yaml:"discovery_proxy" env:"ETCD_DISCOVERY_PROXY"`
|
||||
ElectionTimeout int `yaml:"election_timeout" env:"ETCD_ELECTION_TIMEOUT"`
|
||||
ForceNewCluster bool `yaml:"force_new_cluster" env:"ETCD_FORCE_NEW_CLUSTER"`
|
||||
GraphiteHost string `yaml:"graphite_host" env:"ETCD_GRAPHITE_HOST"`
|
||||
HeartbeatInterval int `yaml:"heartbeat_interval" env:"ETCD_HEARTBEAT_INTERVAL"`
|
||||
HTTPReadTimeout float64 `yaml:"http_read_timeout" env:"ETCD_HTTP_READ_TIMEOUT"`
|
||||
HTTPWriteTimeout float64 `yaml:"http_write_timeout" env:"ETCD_HTTP_WRITE_TIMEOUT"`
|
||||
InitialAdvertisePeerURLs string `yaml:"initial_advertise_peer_urls" env:"ETCD_INITIAL_ADVERTISE_PEER_URLS"`
|
||||
InitialCluster string `yaml:"initial_cluster" env:"ETCD_INITIAL_CLUSTER"`
|
||||
InitialClusterState string `yaml:"initial_cluster_state" env:"ETCD_INITIAL_CLUSTER_STATE"`
|
||||
InitialClusterToken string `yaml:"initial_cluster_token" env:"ETCD_INITIAL_CLUSTER_TOKEN"`
|
||||
KeyFile string `yaml:"key_file" env:"ETCD_KEY_FILE"`
|
||||
ListenClientURLs string `yaml:"listen_client_urls" env:"ETCD_LISTEN_CLIENT_URLS"`
|
||||
ListenPeerURLs string `yaml:"listen_peer_urls" env:"ETCD_LISTEN_PEER_URLS"`
|
||||
MaxResultBuffer int `yaml:"max_result_buffer" env:"ETCD_MAX_RESULT_BUFFER"`
|
||||
MaxRetryAttempts int `yaml:"max_retry_attempts" env:"ETCD_MAX_RETRY_ATTEMPTS"`
|
||||
MaxSnapshots int `yaml:"max_snapshots" env:"ETCD_MAX_SNAPSHOTS"`
|
||||
MaxWALs int `yaml:"max_wals" env:"ETCD_MAX_WALS"`
|
||||
Name string `yaml:"name" env:"ETCD_NAME"`
|
||||
PeerAddr string `yaml:"peer_addr" env:"ETCD_PEER_ADDR"`
|
||||
PeerBindAddr string `yaml:"peer_bind_addr" env:"ETCD_PEER_BIND_ADDR"`
|
||||
PeerCAFile string `yaml:"peer_ca_file" env:"ETCD_PEER_CA_FILE"`
|
||||
PeerCertFile string `yaml:"peer_cert_file" env:"ETCD_PEER_CERT_FILE"`
|
||||
PeerElectionTimeout int `yaml:"peer_election_timeout" env:"ETCD_PEER_ELECTION_TIMEOUT"`
|
||||
PeerHeartbeatInterval int `yaml:"peer_heartbeat_interval" env:"ETCD_PEER_HEARTBEAT_INTERVAL"`
|
||||
PeerKeyFile string `yaml:"peer_key_file" env:"ETCD_PEER_KEY_FILE"`
|
||||
Peers string `yaml:"peers" env:"ETCD_PEERS"`
|
||||
PeersFile string `yaml:"peers_file" env:"ETCD_PEERS_FILE"`
|
||||
Proxy string `yaml:"proxy" env:"ETCD_PROXY"`
|
||||
RetryInterval float64 `yaml:"retry_interval" env:"ETCD_RETRY_INTERVAL"`
|
||||
Snapshot bool `yaml:"snapshot" env:"ETCD_SNAPSHOT"`
|
||||
SnapshotCount int `yaml:"snapshot_count" env:"ETCD_SNAPSHOTCOUNT"`
|
||||
StrTrace string `yaml:"trace" env:"ETCD_TRACE"`
|
||||
Verbose bool `yaml:"verbose" env:"ETCD_VERBOSE"`
|
||||
VeryVerbose bool `yaml:"very_verbose" env:"ETCD_VERY_VERBOSE"`
|
||||
VeryVeryVerbose bool `yaml:"very_very_verbose" env:"ETCD_VERY_VERY_VERBOSE"`
|
||||
}
|
@@ -1,23 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

type File struct {
	Encoding           string `yaml:"encoding" valid:"^(base64|b64|gz|gzip|gz\\+base64|gzip\\+base64|gz\\+b64|gzip\\+b64)$"`
	Content            string `yaml:"content"`
	Owner              string `yaml:"owner"`
	Path               string `yaml:"path"`
	RawFilePermissions string `yaml:"permissions" valid:"^0?[0-7]{3,4}$"`
}
@@ -1,71 +0,0 @@
|
||||
/*
|
||||
Copyright 2014 CoreOS, Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestEncodingValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "base64", isValid: true},
|
||||
{value: "b64", isValid: true},
|
||||
{value: "gz", isValid: true},
|
||||
{value: "gzip", isValid: true},
|
||||
{value: "gz+base64", isValid: true},
|
||||
{value: "gzip+base64", isValid: true},
|
||||
{value: "gz+b64", isValid: true},
|
||||
{value: "gzip+b64", isValid: true},
|
||||
{value: "gzzzzbase64", isValid: false},
|
||||
{value: "gzipppbase64", isValid: false},
|
||||
{value: "unknown", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(File{Encoding: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRawFilePermissionsValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "744", isValid: true},
|
||||
{value: "0744", isValid: true},
|
||||
{value: "1744", isValid: true},
|
||||
{value: "01744", isValid: true},
|
||||
{value: "11744", isValid: false},
|
||||
{value: "rwxr--r--", isValid: false},
|
||||
{value: "800", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(File{RawFilePermissions: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,26 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Flannel struct {
|
||||
EtcdEndpoints string `yaml:"etcd_endpoints" env:"FLANNELD_ETCD_ENDPOINTS"`
|
||||
EtcdCAFile string `yaml:"etcd_cafile" env:"FLANNELD_ETCD_CAFILE"`
|
||||
EtcdCertFile string `yaml:"etcd_certfile" env:"FLANNELD_ETCD_CERTFILE"`
|
||||
EtcdKeyFile string `yaml:"etcd_keyfile" env:"FLANNELD_ETCD_KEYFILE"`
|
||||
EtcdPrefix string `yaml:"etcd_prefix" env:"FLANNELD_ETCD_PREFIX"`
|
||||
IPMasq string `yaml:"ip_masq" env:"FLANNELD_IP_MASQ"`
|
||||
SubnetFile string `yaml:"subnet_file" env:"FLANNELD_SUBNET_FILE"`
|
||||
Iface string `yaml:"interface" env:"FLANNELD_IFACE"`
|
||||
}
|
@@ -1,29 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Fleet struct {
|
||||
AgentTTL string `yaml:"agent_ttl" env:"FLEET_AGENT_TTL"`
|
||||
EngineReconcileInterval float64 `yaml:"engine_reconcile_interval" env:"FLEET_ENGINE_RECONCILE_INTERVAL"`
|
||||
EtcdCAFile string `yaml:"etcd_cafile" env:"FLEET_ETCD_CAFILE"`
|
||||
EtcdCertFile string `yaml:"etcd_certfile" env:"FLEET_ETCD_CERTFILE"`
|
||||
EtcdKeyFile string `yaml:"etcd_keyfile" env:"FLEET_ETCD_KEYFILE"`
|
||||
EtcdKeyPrefix string `yaml:"etcd_key_prefix" env:"FLEET_ETCD_KEY_PREFIX"`
|
||||
EtcdRequestTimeout float64 `yaml:"etcd_request_timeout" env:"FLEET_ETCD_REQUEST_TIMEOUT"`
|
||||
EtcdServers string `yaml:"etcd_servers" env:"FLEET_ETCD_SERVERS"`
|
||||
Metadata string `yaml:"metadata" env:"FLEET_METADATA"`
|
||||
PublicIP string `yaml:"public_ip" env:"FLEET_PUBLIC_IP"`
|
||||
Verbosity int `yaml:"verbosity" env:"FLEET_VERBOSITY"`
|
||||
}
|
@@ -1,22 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type Locksmith struct {
|
||||
Endpoint string `yaml:"endpoint" env:"LOCKSMITHD_ENDPOINT"`
|
||||
EtcdCAFile string `yaml:"etcd_cafile" env:"LOCKSMITHD_ETCD_CAFILE"`
|
||||
EtcdCertFile string `yaml:"etcd_certfile" env:"LOCKSMITHD_ETCD_CERTFILE"`
|
||||
EtcdKeyFile string `yaml:"etcd_keyfile" env:"LOCKSMITHD_ETCD_KEYFILE"`
|
||||
}
|
@@ -1,23 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type OEM struct {
|
||||
ID string `yaml:"id"`
|
||||
Name string `yaml:"name"`
|
||||
VersionID string `yaml:"version_id"`
|
||||
HomeURL string `yaml:"home_url"`
|
||||
BugReportURL string `yaml:"bug_report_url"`
|
||||
}
|
@@ -1,31 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
type Script []byte
|
||||
|
||||
func IsScript(userdata string) bool {
|
||||
header := strings.SplitN(userdata, "\n", 2)[0]
|
||||
return strings.HasPrefix(header, "#!")
|
||||
}
|
||||
|
||||
func NewScript(userdata string) (*Script, error) {
|
||||
s := Script(userdata)
|
||||
return &s, nil
|
||||
}
|
@@ -1,30 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

type Unit struct {
	Name    string       `yaml:"name"`
	Mask    bool         `yaml:"mask"`
	Enable  bool         `yaml:"enable"`
	Runtime bool         `yaml:"runtime"`
	Content string       `yaml:"content"`
	Command string       `yaml:"command" valid:"^(start|stop|restart|reload|try-restart|reload-or-restart|reload-or-try-restart)$"`
	DropIns []UnitDropIn `yaml:"drop_ins"`
}

type UnitDropIn struct {
	Name    string `yaml:"name"`
	Content string `yaml:"content"`
}
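A minimal sketch, not part of the change, of how a `Unit` might appear in user data and how its `command` tag is enforced; the unit name and payload are invented for the example, while `NewCloudConfig` and `AssertStructValid` are the functions shown elsewhere in this diff.

```go
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	// Hypothetical user data declaring a single unit.
	userdata := `#cloud-config
coreos:
  units:
    - name: etcd.service
      command: start
      runtime: true
`
	cfg, err := config.NewCloudConfig(userdata)
	if err != nil {
		panic(err)
	}

	u := cfg.CoreOS.Units[0]
	fmt.Println(u.Name, u.Command, u.Runtime) // etcd.service start true

	// The 'valid' tag on Command accepts only the systemctl verbs listed above.
	fmt.Println(config.AssertStructValid(u)) // <nil>
	fmt.Println(config.AssertStructValid(config.Unit{Command: "bounce"}))
	// invalid value "bounce" for option "Command" (valid options: ...)
}
```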
@@ -1,46 +0,0 @@
|
||||
/*
|
||||
Copyright 2014 CoreOS, Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestCommandValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "start", isValid: true},
|
||||
{value: "stop", isValid: true},
|
||||
{value: "restart", isValid: true},
|
||||
{value: "reload", isValid: true},
|
||||
{value: "try-restart", isValid: true},
|
||||
{value: "reload-or-restart", isValid: true},
|
||||
{value: "reload-or-try-restart", isValid: true},
|
||||
{value: "tryrestart", isValid: false},
|
||||
{value: "unknown", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(Unit{Command: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,21 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

type Update struct {
	RebootStrategy string `yaml:"reboot_strategy" env:"REBOOT_STRATEGY" valid:"^(best-effort|etcd-lock|reboot|off)$"`
	Group          string `yaml:"group" env:"GROUP"`
	Server         string `yaml:"server" env:"SERVER"`
}
@@ -1,43 +0,0 @@
|
||||
/*
|
||||
Copyright 2014 CoreOS, Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestRebootStrategyValid(t *testing.T) {
|
||||
tests := []struct {
|
||||
value string
|
||||
|
||||
isValid bool
|
||||
}{
|
||||
{value: "best-effort", isValid: true},
|
||||
{value: "etcd-lock", isValid: true},
|
||||
{value: "reboot", isValid: true},
|
||||
{value: "off", isValid: true},
|
||||
{value: "besteffort", isValid: false},
|
||||
{value: "unknown", isValid: false},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
isValid := (nil == AssertStructValid(Update{RebootStrategy: tt.value}))
|
||||
if tt.isValid != isValid {
|
||||
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,33 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
type User struct {
|
||||
Name string `yaml:"name"`
|
||||
PasswordHash string `yaml:"passwd"`
|
||||
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
|
||||
SSHImportGithubUser string `yaml:"coreos_ssh_import_github"`
|
||||
SSHImportGithubUsers []string `yaml:"coreos_ssh_import_github_users"`
|
||||
SSHImportURL string `yaml:"coreos_ssh_import_url"`
|
||||
GECOS string `yaml:"gecos"`
|
||||
Homedir string `yaml:"homedir"`
|
||||
NoCreateHome bool `yaml:"no_create_home"`
|
||||
PrimaryGroup string `yaml:"primary_group"`
|
||||
Groups []string `yaml:"groups"`
|
||||
NoUserGroup bool `yaml:"no_user_group"`
|
||||
System bool `yaml:"system"`
|
||||
NoLogInit bool `yaml:"no_log_init"`
|
||||
Shell string `yaml:"shell"`
|
||||
}
|
@@ -1,52 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package validate

import (
	"strings"
)

// context represents the current position within a newline-delimited string.
// Each line is loaded, one by one, into currentLine (newline omitted) and
// lineNumber keeps track of its position within the original string.
type context struct {
	currentLine    string
	remainingLines string
	lineNumber     int
}

// Increment moves the context to the next line (if available).
func (c *context) Increment() {
	if c.currentLine == "" && c.remainingLines == "" {
		return
	}

	lines := strings.SplitN(c.remainingLines, "\n", 2)
	c.currentLine = lines[0]
	if len(lines) == 2 {
		c.remainingLines = lines[1]
	} else {
		c.remainingLines = ""
	}
	c.lineNumber++
}

// NewContext creates a context from the provided data. It strips out all
// carriage returns and moves to the first line (if available).
func NewContext(content []byte) context {
	c := context{remainingLines: strings.Replace(string(content), "\r", "", -1)}
	c.Increment()
	return c
}
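As an aside, a sketch of how this cursor is meant to be driven. It is written as an in-package helper because the type and its fields are unexported; `walkLines` is a hypothetical name, not something this change adds.

```go
package validate

import "fmt"

// walkLines visits a buffer line by line, tracking the same 1-based line
// numbers that the validation rules report against.
func walkLines(content []byte) {
	for c := NewContext(content); c.currentLine != "" || c.remainingLines != ""; c.Increment() {
		fmt.Printf("%d: %s\n", c.lineNumber, c.currentLine)
	}
}
```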
@@ -1,131 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestNewContext(t *testing.T) {
|
||||
tests := []struct {
|
||||
in string
|
||||
|
||||
out context
|
||||
}{
|
||||
{
|
||||
out: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 0,
|
||||
},
|
||||
},
|
||||
{
|
||||
in: "this\r\nis\r\na\r\ntest",
|
||||
out: context{
|
||||
currentLine: "this",
|
||||
remainingLines: "is\na\ntest",
|
||||
lineNumber: 1,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if out := NewContext([]byte(tt.in)); !reflect.DeepEqual(tt.out, out) {
|
||||
t.Errorf("bad context (%q): want %#v, got %#v", tt.in, tt.out, out)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIncrement(t *testing.T) {
|
||||
tests := []struct {
|
||||
init context
|
||||
op func(c *context)
|
||||
|
||||
res context
|
||||
}{
|
||||
{
|
||||
init: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 0,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 0,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
{
|
||||
init: context{
|
||||
currentLine: "test",
|
||||
remainingLines: "",
|
||||
lineNumber: 1,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "",
|
||||
remainingLines: "",
|
||||
lineNumber: 2,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
{
|
||||
init: context{
|
||||
currentLine: "this",
|
||||
remainingLines: "is\na\ntest",
|
||||
lineNumber: 1,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "is",
|
||||
remainingLines: "a\ntest",
|
||||
lineNumber: 2,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
{
|
||||
init: context{
|
||||
currentLine: "this",
|
||||
remainingLines: "is\na\ntest",
|
||||
lineNumber: 1,
|
||||
},
|
||||
res: context{
|
||||
currentLine: "test",
|
||||
remainingLines: "",
|
||||
lineNumber: 4,
|
||||
},
|
||||
op: func(c *context) {
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
c.Increment()
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
res := tt.init
|
||||
if tt.op(&res); !reflect.DeepEqual(tt.res, res) {
|
||||
t.Errorf("bad context (%d, %#v): want %#v, got %#v", i, tt.init, tt.res, res)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,157 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
)
|
||||
|
||||
var (
|
||||
yamlKey = regexp.MustCompile(`^ *-? ?(?P<key>.*?):`)
|
||||
yamlElem = regexp.MustCompile(`^ *-`)
|
||||
)
|
||||
|
||||
type node struct {
|
||||
name string
|
||||
line int
|
||||
children []node
|
||||
field reflect.StructField
|
||||
reflect.Value
|
||||
}
|
||||
|
||||
// Child attempts to find the child with the given name in the node's list of
|
||||
// children. If no such child is found, an invalid node is returned.
|
||||
func (n node) Child(name string) node {
|
||||
for _, c := range n.children {
|
||||
if c.name == name {
|
||||
return c
|
||||
}
|
||||
}
|
||||
return node{}
|
||||
}
|
||||
|
||||
// HumanType returns the human-consumable string representation of the type of
|
||||
// the node.
|
||||
func (n node) HumanType() string {
|
||||
switch k := n.Kind(); k {
|
||||
case reflect.Slice:
|
||||
c := n.Type().Elem()
|
||||
return "[]" + node{Value: reflect.New(c).Elem()}.HumanType()
|
||||
default:
|
||||
return k.String()
|
||||
}
|
||||
}
|
||||
|
||||
// NewNode returns the node representation of the given value. The context
|
||||
// will be used in an attempt to determine line numbers for the given value.
|
||||
func NewNode(value interface{}, context context) node {
|
||||
var n node
|
||||
toNode(value, context, &n)
|
||||
return n
|
||||
}
|
||||
|
||||
// toNode converts the given value into a node and then recursively processes
|
||||
// each of the nodes components (e.g. fields, array elements, keys).
|
||||
func toNode(v interface{}, c context, n *node) {
|
||||
vv := reflect.ValueOf(v)
|
||||
if !vv.IsValid() {
|
||||
return
|
||||
}
|
||||
|
||||
n.Value = vv
|
||||
switch vv.Kind() {
|
||||
case reflect.Struct:
|
||||
// Walk over each field in the structure, skipping unexported fields,
|
||||
// and create a node for it.
|
||||
for i := 0; i < vv.Type().NumField(); i++ {
|
||||
ft := vv.Type().Field(i)
|
||||
k := ft.Tag.Get("yaml")
|
||||
if k == "-" || k == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
cn := node{name: k, field: ft}
|
||||
c, ok := findKey(cn.name, c)
|
||||
if ok {
|
||||
cn.line = c.lineNumber
|
||||
}
|
||||
toNode(vv.Field(i).Interface(), c, &cn)
|
||||
n.children = append(n.children, cn)
|
||||
}
|
||||
case reflect.Map:
|
||||
// Walk over each key in the map and create a node for it.
|
||||
v := v.(map[interface{}]interface{})
|
||||
for k, cv := range v {
|
||||
cn := node{name: fmt.Sprintf("%s", k)}
|
||||
c, ok := findKey(cn.name, c)
|
||||
if ok {
|
||||
cn.line = c.lineNumber
|
||||
}
|
||||
toNode(cv, c, &cn)
|
||||
n.children = append(n.children, cn)
|
||||
}
|
||||
case reflect.Slice:
|
||||
// Walk over each element in the slice and create a node for it.
|
||||
// While iterating over the slice, preserve the context after it
|
||||
// is modified. This allows the line numbers to reflect the current
|
||||
// element instead of the first.
|
||||
for i := 0; i < vv.Len(); i++ {
|
||||
cn := node{
|
||||
name: fmt.Sprintf("%s[%d]", n.name, i),
|
||||
field: n.field,
|
||||
}
|
||||
var ok bool
|
||||
c, ok = findElem(c)
|
||||
if ok {
|
||||
cn.line = c.lineNumber
|
||||
}
|
||||
toNode(vv.Index(i).Interface(), c, &cn)
|
||||
n.children = append(n.children, cn)
|
||||
c.Increment()
|
||||
}
|
||||
case reflect.String, reflect.Int, reflect.Bool, reflect.Float64:
|
||||
default:
|
||||
panic(fmt.Sprintf("toNode(): unhandled kind %s", vv.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
// findKey attempts to find the requested key within the provided context.
|
||||
// A modified copy of the context is returned with every line up to the key
|
||||
// incremented past. A boolean, true if the key was found, is also returned.
|
||||
func findKey(key string, context context) (context, bool) {
|
||||
return find(yamlKey, key, context)
|
||||
}
|
||||
|
||||
// findElem attempts to find an array element within the provided context.
|
||||
// A modified copy of the context is returned with every line up to the array
|
||||
// element incremented past. A boolean, true if the key was found, is also
|
||||
// returned.
|
||||
func findElem(context context) (context, bool) {
|
||||
return find(yamlElem, "", context)
|
||||
}
|
||||
|
||||
func find(exp *regexp.Regexp, key string, context context) (context, bool) {
|
||||
for len(context.currentLine) > 0 || len(context.remainingLines) > 0 {
|
||||
matches := exp.FindStringSubmatch(context.currentLine)
|
||||
if len(matches) > 0 && (key == "" || matches[1] == key) {
|
||||
return context, true
|
||||
}
|
||||
|
||||
context.Increment()
|
||||
}
|
||||
return context, false
|
||||
}
|
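For orientation, a hedged in-package sketch of how `NewNode`, `Child`, and the recorded line numbers fit together; `hostnameLine` is a hypothetical helper, and it assumes the caller already holds both the raw user data and its parsed `map[interface{}]interface{}` form.

```go
package validate

import "fmt"

// hostnameLine reports where the "hostname" key was declared in the raw
// document, using the parsed YAML map to build the node tree.
func hostnameLine(raw []byte, parsed map[interface{}]interface{}) {
	root := NewNode(parsed, NewContext(raw))
	child := root.Child("hostname")
	fmt.Printf("hostname is a %s on line %d\n", child.HumanType(), child.line)
}
```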
@@ -1,284 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestChild(t *testing.T) {
|
||||
tests := []struct {
|
||||
parent node
|
||||
name string
|
||||
|
||||
child node
|
||||
}{
|
||||
{},
|
||||
{
|
||||
name: "c1",
|
||||
},
|
||||
{
|
||||
parent: node{
|
||||
children: []node{
|
||||
node{name: "c1"},
|
||||
node{name: "c2"},
|
||||
node{name: "c3"},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
parent: node{
|
||||
children: []node{
|
||||
node{name: "c1"},
|
||||
node{name: "c2"},
|
||||
node{name: "c3"},
|
||||
},
|
||||
},
|
||||
name: "c2",
|
||||
child: node{name: "c2"},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if child := tt.parent.Child(tt.name); !reflect.DeepEqual(tt.child, child) {
|
||||
t.Errorf("bad child (%q): want %#v, got %#v", tt.name, tt.child, child)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestHumanType(t *testing.T) {
|
||||
tests := []struct {
|
||||
node node
|
||||
|
||||
humanType string
|
||||
}{
|
||||
{
|
||||
humanType: "invalid",
|
||||
},
|
||||
{
|
||||
node: node{Value: reflect.ValueOf("hello")},
|
||||
humanType: "string",
|
||||
},
|
||||
{
|
||||
node: node{
|
||||
Value: reflect.ValueOf([]int{1, 2}),
|
||||
children: []node{
|
||||
node{Value: reflect.ValueOf(1)},
|
||||
node{Value: reflect.ValueOf(2)},
|
||||
}},
|
||||
humanType: "[]int",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if humanType := tt.node.HumanType(); tt.humanType != humanType {
|
||||
t.Errorf("bad type (%q): want %q, got %q", tt.node, tt.humanType, humanType)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestToNode(t *testing.T) {
|
||||
tests := []struct {
|
||||
value interface{}
|
||||
context context
|
||||
|
||||
node node
|
||||
}{
|
||||
{},
|
||||
{
|
||||
value: struct{}{},
|
||||
node: node{Value: reflect.ValueOf(struct{}{})},
|
||||
},
|
||||
{
|
||||
value: struct {
|
||||
A int `yaml:"a"`
|
||||
}{},
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
name: "a",
|
||||
field: reflect.TypeOf(struct {
|
||||
A int `yaml:"a"`
|
||||
}{}).Field(0),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
value: struct {
|
||||
A []int `yaml:"a"`
|
||||
}{},
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
name: "a",
|
||||
field: reflect.TypeOf(struct {
|
||||
A []int `yaml:"a"`
|
||||
}{}).Field(0),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
value: map[interface{}]interface{}{
|
||||
"a": map[interface{}]interface{}{
|
||||
"b": 2,
|
||||
},
|
||||
},
|
||||
context: NewContext([]byte("a:\n b: 2")),
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
line: 1,
|
||||
name: "a",
|
||||
children: []node{
|
||||
node{name: "b", line: 2},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
value: struct {
|
||||
A struct {
|
||||
Jon bool `yaml:"b"`
|
||||
} `yaml:"a"`
|
||||
}{},
|
||||
node: node{
|
||||
children: []node{
|
||||
node{
|
||||
name: "a",
|
||||
children: []node{
|
||||
node{
|
||||
name: "b",
|
||||
field: reflect.TypeOf(struct {
|
||||
Jon bool `yaml:"b"`
|
||||
}{}).Field(0),
|
||||
Value: reflect.ValueOf(false),
|
||||
},
|
||||
},
|
||||
field: reflect.TypeOf(struct {
|
||||
A struct {
|
||||
Jon bool `yaml:"b"`
|
||||
} `yaml:"a"`
|
||||
}{}).Field(0),
|
||||
Value: reflect.ValueOf(struct {
|
||||
Jon bool `yaml:"b"`
|
||||
}{}),
|
||||
},
|
||||
},
|
||||
Value: reflect.ValueOf(struct {
|
||||
A struct {
|
||||
Jon bool `yaml:"b"`
|
||||
} `yaml:"a"`
|
||||
}{}),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
var node node
|
||||
toNode(tt.value, tt.context, &node)
|
||||
if !nodesEqual(tt.node, node) {
|
||||
t.Errorf("bad node (%#v): want %#v, got %#v", tt.value, tt.node, node)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFindKey(t *testing.T) {
|
||||
tests := []struct {
|
||||
key string
|
||||
context context
|
||||
|
||||
found bool
|
||||
}{
|
||||
{},
|
||||
{
|
||||
key: "key1",
|
||||
context: NewContext([]byte("key1: hi")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
key: "key2",
|
||||
context: NewContext([]byte("key1: hi")),
|
||||
found: false,
|
||||
},
|
||||
{
|
||||
key: "key3",
|
||||
context: NewContext([]byte("key1:\n key2:\n key3: hi")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
key: "key4",
|
||||
context: NewContext([]byte("key1:\n - key4: hi")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
key: "key5",
|
||||
context: NewContext([]byte("#key5")),
|
||||
found: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if _, found := findKey(tt.key, tt.context); tt.found != found {
|
||||
t.Errorf("bad find (%q): want %t, got %t", tt.key, tt.found, found)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFindElem(t *testing.T) {
|
||||
tests := []struct {
|
||||
context context
|
||||
|
||||
found bool
|
||||
}{
|
||||
{},
|
||||
{
|
||||
context: NewContext([]byte("test: hi")),
|
||||
found: false,
|
||||
},
|
||||
{
|
||||
context: NewContext([]byte("test:\n - a\n -b")),
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
context: NewContext([]byte("test:\n -\n a")),
|
||||
found: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if _, found := findElem(tt.context); tt.found != found {
|
||||
t.Errorf("bad find (%q): want %t, got %t", tt.context, tt.found, found)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func nodesEqual(a, b node) bool {
|
||||
if a.name != b.name ||
|
||||
a.line != b.line ||
|
||||
!reflect.DeepEqual(a.field, b.field) ||
|
||||
len(a.children) != len(b.children) {
|
||||
return false
|
||||
}
|
||||
for i := 0; i < len(a.children); i++ {
|
||||
if !nodesEqual(a.children[i], b.children[i]) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
@@ -1,88 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package validate

import (
	"encoding/json"
	"fmt"
)

// Report represents the list of entries resulting from validation.
type Report struct {
	entries []Entry
}

// Error adds an error entry to the report.
func (r *Report) Error(line int, message string) {
	r.entries = append(r.entries, Entry{entryError, message, line})
}

// Warning adds a warning entry to the report.
func (r *Report) Warning(line int, message string) {
	r.entries = append(r.entries, Entry{entryWarning, message, line})
}

// Info adds an info entry to the report.
func (r *Report) Info(line int, message string) {
	r.entries = append(r.entries, Entry{entryInfo, message, line})
}

// Entries returns the list of entries in the report.
func (r *Report) Entries() []Entry {
	return r.entries
}

// Entry represents a single generic item in the report.
type Entry struct {
	kind    entryKind
	message string
	line    int
}

// String returns a human-readable representation of the entry.
func (e Entry) String() string {
	return fmt.Sprintf("line %d: %s: %s", e.line, e.kind, e.message)
}

// MarshalJSON satisfies the json.Marshaler interface, returning the entry
// encoded as a JSON object.
func (e Entry) MarshalJSON() ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"kind":    e.kind.String(),
		"message": e.message,
		"line":    e.line,
	})
}

type entryKind int

const (
	entryError entryKind = iota
	entryWarning
	entryInfo
)

func (k entryKind) String() string {
	switch k {
	case entryError:
		return "error"
	case entryWarning:
		return "warning"
	case entryInfo:
		return "info"
	default:
		panic(fmt.Sprintf("invalid kind %d", k))
	}
}
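A short usage sketch for the report type; the import path is assumed from the repository layout and the messages are invented, but the methods are exactly those defined above.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/coreos/coreos-cloudinit/config/validate"
)

func main() {
	r := validate.Report{}
	r.Warning(3, "unrecognized key 'coreos_unknown'")
	r.Error(7, "invalid option for reboot_strategy")

	// Each entry renders for humans via Entry.String...
	for _, e := range r.Entries() {
		fmt.Println(e) // e.g. "line 3: warning: unrecognized key 'coreos_unknown'"
	}

	// ...and as JSON via Entry.MarshalJSON.
	out, _ := json.Marshal(r.Entries())
	fmt.Println(string(out))
	// [{"kind":"warning","line":3,...},{"kind":"error","line":7,...}]
}
```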
@@ -1,96 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestEntry(t *testing.T) {
|
||||
tests := []struct {
|
||||
entry Entry
|
||||
|
||||
str string
|
||||
json []byte
|
||||
}{
|
||||
{
|
||||
Entry{entryInfo, "test info", 1},
|
||||
"line 1: info: test info",
|
||||
[]byte(`{"kind":"info","line":1,"message":"test info"}`),
|
||||
},
|
||||
{
|
||||
Entry{entryWarning, "test warning", 1},
|
||||
"line 1: warning: test warning",
|
||||
[]byte(`{"kind":"warning","line":1,"message":"test warning"}`),
|
||||
},
|
||||
{
|
||||
Entry{entryError, "test error", 2},
|
||||
"line 2: error: test error",
|
||||
[]byte(`{"kind":"error","line":2,"message":"test error"}`),
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
if str := tt.entry.String(); tt.str != str {
|
||||
t.Errorf("bad string (%q): want %q, got %q", tt.entry, tt.str, str)
|
||||
}
|
||||
json, err := tt.entry.MarshalJSON()
|
||||
if err != nil {
|
||||
t.Errorf("bad error (%q): want %v, got %q", tt.entry, nil, err)
|
||||
}
|
||||
if !bytes.Equal(tt.json, json) {
|
||||
t.Errorf("bad JSON (%q): want %q, got %q", tt.entry, tt.json, json)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestReport(t *testing.T) {
|
||||
type reportFunc struct {
|
||||
fn func(*Report, int, string)
|
||||
line int
|
||||
message string
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
fs []reportFunc
|
||||
|
||||
es []Entry
|
||||
}{
|
||||
{
|
||||
[]reportFunc{
|
||||
{(*Report).Warning, 1, "test warning 1"},
|
||||
{(*Report).Error, 2, "test error 2"},
|
||||
{(*Report).Info, 10, "test info 10"},
|
||||
},
|
||||
[]Entry{
|
||||
Entry{entryWarning, "test warning 1", 1},
|
||||
Entry{entryError, "test error 2", 2},
|
||||
Entry{entryInfo, "test info 10", 10},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
r := Report{}
|
||||
for _, f := range tt.fs {
|
||||
f.fn(&r, f.line, f.message)
|
||||
}
|
||||
if es := r.Entries(); !reflect.DeepEqual(tt.es, es) {
|
||||
t.Errorf("bad entries (%v): want %#v, got %#v", tt.fs, tt.es, es)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,177 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package validate

import (
	"fmt"
	"net/url"
	"path"
	"reflect"
	"strings"

	"github.com/coreos/coreos-cloudinit/config"
)

type rule func(config node, report *Report)

// Rules contains all of the validation rules.
var Rules []rule = []rule{
	checkDiscoveryUrl,
	checkEncoding,
	checkStructure,
	checkValidity,
	checkWriteFiles,
	checkWriteFilesUnderCoreos,
}

// checkDiscoveryUrl verifies that the string is a valid url.
func checkDiscoveryUrl(cfg node, report *Report) {
	c := cfg.Child("coreos").Child("etcd").Child("discovery")
	if !c.IsValid() {
		return
	}

	if _, err := url.ParseRequestURI(c.String()); err != nil {
		report.Warning(c.line, "discovery URL is not valid")
	}
}

// checkEncoding validates that, for each file under 'write_files', the
// content can be decoded given the specified encoding.
func checkEncoding(cfg node, report *Report) {
	for _, f := range cfg.Child("write_files").children {
		e := f.Child("encoding")
		if !e.IsValid() {
			continue
		}

		c := f.Child("content")
		if _, err := config.DecodeContent(c.String(), e.String()); err != nil {
			report.Error(c.line, fmt.Sprintf("content cannot be decoded as %q", e.String()))
		}
	}
}

// checkStructure compares the provided config to the empty config.CloudConfig
// structure. Each node is checked to make sure that it exists in the known
// structure and that its type is compatible.
func checkStructure(cfg node, report *Report) {
	g := NewNode(config.CloudConfig{}, NewContext([]byte{}))
	checkNodeStructure(cfg, g, report)
}

func checkNodeStructure(n, g node, r *Report) {
	if !isCompatible(n.Kind(), g.Kind()) {
		r.Warning(n.line, fmt.Sprintf("incorrect type for %q (want %s)", n.name, g.HumanType()))
		return
	}

	switch g.Kind() {
	case reflect.Struct:
		for _, cn := range n.children {
			if cg := g.Child(cn.name); cg.IsValid() {
				checkNodeStructure(cn, cg, r)
			} else {
				r.Warning(cn.line, fmt.Sprintf("unrecognized key %q", cn.name))
			}
		}
	case reflect.Slice:
		for _, cn := range n.children {
			var cg node
			c := g.Type().Elem()
			toNode(reflect.New(c).Elem().Interface(), context{}, &cg)
			checkNodeStructure(cn, cg, r)
		}
	case reflect.String, reflect.Int, reflect.Float64, reflect.Bool:
	default:
		panic(fmt.Sprintf("checkNodeStructure(): unhandled kind %s", g.Kind()))
	}
}

// isCompatible determines if the type of kind n can be converted to the type
// of kind g in the context of YAML. This is not an exhaustive list, but it's
// enough for the purposes of cloud-config validation.
func isCompatible(n, g reflect.Kind) bool {
	switch g {
	case reflect.String:
		return n == reflect.String || n == reflect.Int || n == reflect.Float64 || n == reflect.Bool
	case reflect.Struct:
		return n == reflect.Struct || n == reflect.Map
	case reflect.Float64:
		return n == reflect.Float64 || n == reflect.Int
	case reflect.Bool, reflect.Slice, reflect.Int:
		return n == g
	default:
		panic(fmt.Sprintf("isCompatible(): unhandled kind %s", g))
	}
}

// checkValidity checks the value of every node in the provided config by
// running config.AssertValid() on it.
func checkValidity(cfg node, report *Report) {
	g := NewNode(config.CloudConfig{}, NewContext([]byte{}))
	checkNodeValidity(cfg, g, report)
}

func checkNodeValidity(n, g node, r *Report) {
	if err := config.AssertValid(n.Value, g.field.Tag.Get("valid")); err != nil {
		r.Error(n.line, fmt.Sprintf("invalid value %v", n.Value.Interface()))
	}
	switch g.Kind() {
	case reflect.Struct:
		for _, cn := range n.children {
			if cg := g.Child(cn.name); cg.IsValid() {
				checkNodeValidity(cn, cg, r)
			}
		}
	case reflect.Slice:
		for _, cn := range n.children {
			var cg node
			c := g.Type().Elem()
			toNode(reflect.New(c).Elem().Interface(), context{}, &cg)
			checkNodeValidity(cn, cg, r)
		}
	case reflect.String, reflect.Int, reflect.Float64, reflect.Bool:
	default:
		panic(fmt.Sprintf("checkNodeValidity(): unhandled kind %s", g.Kind()))
	}
}

// checkWriteFiles checks to make sure that the target file can actually be
// written. Note that this check is approximate (it only checks to see if the
// file is under /usr).
func checkWriteFiles(cfg node, report *Report) {
	for _, f := range cfg.Child("write_files").children {
		c := f.Child("path")
		if !c.IsValid() {
			continue
		}

		d := path.Dir(c.String())
		switch {
		case strings.HasPrefix(d, "/usr"):
			report.Error(c.line, "file cannot be written to a read-only filesystem")
		}
	}
}

// checkWriteFilesUnderCoreos checks to see if the 'write_files' node is a
// child of 'coreos' (it shouldn't be).
func checkWriteFilesUnderCoreos(cfg node, report *Report) {
	c := cfg.Child("coreos").Child("write_files")
	if c.IsValid() {
		report.Info(c.line, "write_files doesn't belong under coreos")
	}
}
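As a rough illustration of how these rules compose, below is a hypothetical extra rule written against the same rule signature; the hostname-length check itself is invented for the example and is not part of this diff. It would run by being appended to Rules, or by being passed in the []rule slice handed to validateCloudConfig (shown further down).

```
// checkHostnameLength is a hypothetical rule with the same shape as the
// rules above: inspect one node of the config and record findings on the
// report. The 63-character limit is only an example constraint.
func checkHostnameLength(cfg node, report *Report) {
	c := cfg.Child("hostname")
	if !c.IsValid() {
		return
	}
	if len(c.String()) > 63 {
		report.Error(c.line, "hostname must be 63 characters or fewer")
	}
}
```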
@@ -1,399 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestCheckDiscoveryUrl(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: https://discovery.etcd.io/00000000000000000000000000000000",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: http://custom.domain/mytoken",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: disco",
|
||||
entries: []Entry{{entryWarning, "discovery URL is not valid", 3}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkDiscoveryUrl(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckEncoding(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "write_files:\n - encoding: base64\n content: aGVsbG8K",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - content: !!binary aGVsbG8K",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: base64\n content: !!binary aGVsbG8K",
|
||||
entries: []Entry{{entryError, `content cannot be decoded as "base64"`, 3}},
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: base64\n content: !!binary YUdWc2JHOEsK",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: gzip\n content: !!binary H4sIAOC3tVQAA8tIzcnJ5wIAIDA6NgYAAAA=",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: gzip+base64\n content: H4sIAOC3tVQAA8tIzcnJ5wIAIDA6NgYAAAA=",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - encoding: custom\n content: hello",
|
||||
entries: []Entry{{entryError, `content cannot be decoded as "custom"`, 3}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkEncoding(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckStructure(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
|
||||
// Test for unrecognized keys
|
||||
{
|
||||
config: "test:",
|
||||
entries: []Entry{{entryWarning, "unrecognized key \"test\"", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n bad:",
|
||||
entries: []Entry{{entryWarning, "unrecognized key \"bad\"", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: good",
|
||||
},
|
||||
|
||||
// Test for error on list of nodes
|
||||
{
|
||||
config: "coreos:\n units:\n - hello\n - goodbye",
|
||||
entries: []Entry{
|
||||
{entryWarning, "incorrect type for \"units[0]\" (want struct)", 3},
|
||||
{entryWarning, "incorrect type for \"units[1]\" (want struct)", 4},
|
||||
},
|
||||
},
|
||||
|
||||
// Test for incorrect types
|
||||
// Want boolean
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: true",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable:\n bad:",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - enable:\n - bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"enable\" (want bool)", 3}},
|
||||
},
|
||||
// Want string
|
||||
{
|
||||
config: "hostname: true",
|
||||
},
|
||||
{
|
||||
config: "hostname: 4",
|
||||
},
|
||||
{
|
||||
config: "hostname: host",
|
||||
},
|
||||
{
|
||||
config: "hostname:\n name:",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"hostname\" (want string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "hostname:\n - name",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"hostname\" (want string)", 1}},
|
||||
},
|
||||
// Want struct
|
||||
{
|
||||
config: "coreos: true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos: 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos: hello",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n etcd:\n discovery: fire in the disco",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n - hello",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"coreos\" (want struct)", 1}},
|
||||
},
|
||||
// Want []string
|
||||
{
|
||||
config: "ssh_authorized_keys: true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys: 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys: key",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys:\n key: value",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys\" (want []string)", 1}},
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys:\n - key",
|
||||
},
|
||||
{
|
||||
config: "ssh_authorized_keys:\n - key: value",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"ssh_authorized_keys[0]\" (want string)", 2}},
|
||||
},
|
||||
// Want []struct
|
||||
{
|
||||
config: "users:\n true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n bad:",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users\" (want []struct)", 1}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - name: good",
|
||||
},
|
||||
// Want struct within array
|
||||
{
|
||||
config: "users:\n - true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - name: hi\n - true",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[1]\" (want struct)", 3}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - 4",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
{
|
||||
config: "users:\n - - bad",
|
||||
entries: []Entry{{entryWarning, "incorrect type for \"users[0]\" (want struct)", 2}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkStructure(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckValidity(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
// string
|
||||
{
|
||||
config: "hostname: test",
|
||||
},
|
||||
|
||||
// int
|
||||
{
|
||||
config: "coreos:\n fleet:\n verbosity: 2",
|
||||
},
|
||||
|
||||
// bool
|
||||
{
|
||||
config: "coreos:\n units:\n - enable: true",
|
||||
},
|
||||
|
||||
// slice
|
||||
{
|
||||
config: "coreos:\n units:\n - command: start\n - name: stop",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n units:\n - command: lol",
|
||||
entries: []Entry{{entryError, "invalid value lol", 3}},
|
||||
},
|
||||
|
||||
// struct
|
||||
{
|
||||
config: "coreos:\n update:\n reboot_strategy: off",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n update:\n reboot_strategy: always",
|
||||
entries: []Entry{{entryError, "invalid value always", 3}},
|
||||
},
|
||||
|
||||
// unknown
|
||||
{
|
||||
config: "unknown: hi",
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkValidity(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckWriteFiles(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "write_files:\n - path: /valid",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - path: /tmp/usr/valid",
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - path: /usr/invalid",
|
||||
entries: []Entry{{entryError, "file cannot be written to a read-only filesystem", 2}},
|
||||
},
|
||||
{
|
||||
config: "write-files:\n - path: /tmp/../usr/invalid",
|
||||
entries: []Entry{{entryError, "file cannot be written to a read-only filesystem", 2}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkWriteFiles(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckWriteFilesUnderCoreos(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "write_files:\n - path: /hi",
|
||||
},
|
||||
{
|
||||
config: "coreos:\n write_files:\n - path: /hi",
|
||||
entries: []Entry{{entryInfo, "write_files doesn't belong under coreos", 2}},
|
||||
},
|
||||
{
|
||||
config: "coreos:\n write-files:\n - path: /hyphen",
|
||||
entries: []Entry{{entryInfo, "write_files doesn't belong under coreos", 2}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r := Report{}
|
||||
n, err := parseCloudConfig([]byte(tt.config), &r)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
checkWriteFilesUnderCoreos(n, &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%d, %q): want %#v, got %#v", i, tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,162 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package validate

import (
	"errors"
	"fmt"
	"regexp"
	"strconv"
	"strings"

	"github.com/coreos/coreos-cloudinit/config"

	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/coreos/yaml"
)

var (
	yamlLineError = regexp.MustCompile(`^YAML error: line (?P<line>[[:digit:]]+): (?P<msg>.*)$`)
	yamlError     = regexp.MustCompile(`^YAML error: (?P<msg>.*)$`)
)

// Validate runs a series of validation tests against the given userdata and
// returns a report detailing all of the issues. Presently, only cloud-configs
// can be validated.
func Validate(userdataBytes []byte) (Report, error) {
	switch {
	case len(userdataBytes) == 0:
		return Report{}, nil
	case config.IsScript(string(userdataBytes)):
		return Report{}, nil
	case config.IsCloudConfig(string(userdataBytes)):
		return validateCloudConfig(userdataBytes, Rules)
	default:
		return Report{entries: []Entry{
			Entry{kind: entryError, message: `must be "#cloud-config" or begin with "#!"`, line: 1},
		}}, nil
	}
}

// validateCloudConfig runs all of the validation rules in Rules and returns
// the resulting report and any errors encountered.
func validateCloudConfig(config []byte, rules []rule) (report Report, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("%v", r)
		}
	}()

	c, err := parseCloudConfig(config, &report)
	if err != nil {
		return report, err
	}

	for _, r := range rules {
		r(c, &report)
	}
	return report, nil
}

// parseCloudConfig parses the provided config into a node structure and logs
// any parsing issues into the provided report. Unrecoverable errors are
// returned as an error.
func parseCloudConfig(cfg []byte, report *Report) (node, error) {
	yaml.UnmarshalMappingKeyTransform = func(nameIn string) (nameOut string) {
		return nameIn
	}
	// Unmarshal the config into an implicitly-typed form. The yaml library
	// will implicitly convert types into their normalized form
	// (e.g. 0744 -> 484, off -> false).
	var weak map[interface{}]interface{}
	if err := yaml.Unmarshal(cfg, &weak); err != nil {
		matches := yamlLineError.FindStringSubmatch(err.Error())
		if len(matches) == 3 {
			line, err := strconv.Atoi(matches[1])
			if err != nil {
				return node{}, err
			}
			msg := matches[2]
			report.Error(line, msg)
			return node{}, nil
		}

		matches = yamlError.FindStringSubmatch(err.Error())
		if len(matches) == 2 {
			report.Error(1, matches[1])
			return node{}, nil
		}

		return node{}, errors.New("couldn't parse yaml error")
	}
	w := NewNode(weak, NewContext(cfg))
	w = normalizeNodeNames(w, report)

	// Unmarshal the config into the explicitly-typed form.
	yaml.UnmarshalMappingKeyTransform = func(nameIn string) (nameOut string) {
		return strings.Replace(nameIn, "-", "_", -1)
	}
	var strong config.CloudConfig
	if err := yaml.Unmarshal([]byte(cfg), &strong); err != nil {
		return node{}, err
	}
	s := NewNode(strong, NewContext(cfg))

	// Coerce the weak and strong nodes: strong nodes replace weak nodes
	// when they are compatible types (this happens when the yaml library
	// converts the input), e.g. weak 484 is replaced by strong 0744, but
	// weak 4 is not replaced by strong false.
	return coerceNodes(w, s), nil
}

// coerceNodes recursively evaluates two nodes, returning a new node containing
// either the weak or strong node's value and its recursively processed
// children. The strong node's value is used if the two nodes are leaves, are
// both valid, and are compatible types (defined by isCompatible()). The weak
// node is returned in all other cases. coerceNodes is used to counteract the
// effects of yaml's automatic type conversion. The weak node is the one
// resulting from unmarshalling into an empty interface{} (the type is
// inferred). The strong node is the one resulting from unmarshalling into a
// struct. If the two nodes are of compatible types, the yaml library correctly
// parsed the value into the strongly typed unmarshalling. In this case, we
// prefer the strong node because it's actually the type we are expecting.
func coerceNodes(w, s node) node {
	n := w
	n.children = nil
	if len(w.children) == 0 && len(s.children) == 0 &&
		w.IsValid() && s.IsValid() &&
		isCompatible(w.Kind(), s.Kind()) {
		n.Value = s.Value
	}

	for _, cw := range w.children {
		n.children = append(n.children, coerceNodes(cw, s.Child(cw.name)))
	}
	return n
}

// normalizeNodeNames replaces all occurrences of '-' with '_' within key names
// and makes a note of each replacement in the report.
func normalizeNodeNames(node node, report *Report) node {
	if strings.Contains(node.name, "-") {
		// TODO(crawford): Enable this message once the new validator hits stable.
		//report.Info(node.line, fmt.Sprintf("%q uses '-' instead of '_'", node.name))
		node.name = strings.Replace(node.name, "-", "_", -1)
	}
	for i := range node.children {
		node.children[i] = normalizeNodeNames(node.children[i], report)
	}
	return node
}
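To make the entry point concrete, here is a minimal sketch of calling Validate from outside the package and printing its report. The import path matches the one used by main.go later in this diff; the sample cloud-config and the calling package are illustrative only.

```
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/config/validate"
)

func main() {
	// Illustrative user-data; "disco" is not a valid discovery URL, so the
	// report should contain a warning from checkDiscoveryUrl.
	userdata := []byte("#cloud-config\nhostname: test\ncoreos:\n  etcd:\n    discovery: disco\n")

	report, err := validate.Validate(userdata)
	if err != nil {
		fmt.Printf("failed while validating user_data (%q)\n", err)
		return
	}
	for _, e := range report.Entries() {
		fmt.Println(e) // e.g. "line 5: warning: discovery URL is not valid"
	}
}
```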
@@ -1,167 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package validate
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestParseCloudConfig(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
entries []Entry
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: " ",
|
||||
entries: []Entry{{entryError, "found character that cannot start any token", 1}},
|
||||
},
|
||||
{
|
||||
config: "a:\na",
|
||||
entries: []Entry{{entryError, "could not find expected ':'", 2}},
|
||||
},
|
||||
{
|
||||
config: "#hello\na:\na",
|
||||
entries: []Entry{{entryError, "could not find expected ':'", 3}},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
r := Report{}
|
||||
parseCloudConfig([]byte(tt.config), &r)
|
||||
|
||||
if e := r.Entries(); !reflect.DeepEqual(tt.entries, e) {
|
||||
t.Errorf("bad report (%s): want %#v, got %#v", tt.config, tt.entries, e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidateCloudConfig(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
rules []rule
|
||||
|
||||
report Report
|
||||
err error
|
||||
}{
|
||||
{
|
||||
rules: []rule{func(_ node, _ *Report) { panic("something happened") }},
|
||||
err: errors.New("something happened"),
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: 0744",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: '0744'",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: 744",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "write_files:\n - permissions: '744'",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "coreos:\n update:\n reboot-strategy: off",
|
||||
rules: Rules,
|
||||
},
|
||||
{
|
||||
config: "coreos:\n update:\n reboot-strategy: false",
|
||||
rules: Rules,
|
||||
report: Report{entries: []Entry{{entryError, "invalid value false", 3}}},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
r, err := validateCloudConfig([]byte(tt.config), tt.rules)
|
||||
if !reflect.DeepEqual(tt.err, err) {
|
||||
t.Errorf("bad error (%s): want %v, got %v", tt.config, tt.err, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.report, r) {
|
||||
t.Errorf("bad report (%s): want %+v, got %+v", tt.config, tt.report, r)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidate(t *testing.T) {
|
||||
tests := []struct {
|
||||
config string
|
||||
|
||||
report Report
|
||||
}{
|
||||
{},
|
||||
{
|
||||
config: "#!/bin/bash\necho hey",
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
r, err := Validate([]byte(tt.config))
|
||||
if err != nil {
|
||||
t.Errorf("bad error (case #%d): want %v, got %v", i, nil, err)
|
||||
}
|
||||
if !reflect.DeepEqual(tt.report, r) {
|
||||
t.Errorf("bad report (case #%d): want %+v, got %+v", i, tt.report, r)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkValidate(b *testing.B) {
|
||||
config := `#cloud-config
|
||||
hostname: test
|
||||
|
||||
coreos:
|
||||
etcd:
|
||||
name: node001
|
||||
discovery: https://discovery.etcd.io/disco
|
||||
addr: $public_ipv4:4001
|
||||
peer-addr: $private_ipv4:7001
|
||||
fleet:
|
||||
verbosity: 2
|
||||
metadata: "hi"
|
||||
update:
|
||||
reboot-strategy: off
|
||||
units:
|
||||
- name: hi.service
|
||||
command: start
|
||||
enable: true
|
||||
- name: bye.service
|
||||
command: stop
|
||||
|
||||
ssh_authorized_keys:
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
|
||||
|
||||
users:
|
||||
- name: me
|
||||
|
||||
write_files:
|
||||
- path: /etc/yes
|
||||
content: "Hi"
|
||||
|
||||
manage_etc_hosts: localhost`
|
||||
|
||||
for i := 0; i < b.N; i++ {
|
||||
if _, err := Validate([]byte(config)); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,364 +1,95 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"fmt"
|
||||
"flag"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"sync"
|
||||
"time"
|
||||
"log"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
"github.com/coreos/coreos-cloudinit/config/validate"
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/configdrive"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/file"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/cloudsigma"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/digitalocean"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/metadata/ec2"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/proc_cmdline"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/url"
|
||||
"github.com/coreos/coreos-cloudinit/datasource/waagent"
|
||||
"github.com/coreos/coreos-cloudinit/initialize"
|
||||
"github.com/coreos/coreos-cloudinit/network"
|
||||
"github.com/coreos/coreos-cloudinit/pkg"
|
||||
"github.com/coreos/coreos-cloudinit/system"
|
||||
"github.com/coreos/coreos-cloudinit/cloudinit"
|
||||
)
|
||||
|
||||
const (
|
||||
version = "1.3.4"
|
||||
datasourceInterval = 100 * time.Millisecond
|
||||
datasourceMaxInterval = 30 * time.Second
|
||||
datasourceTimeout = 5 * time.Minute
|
||||
)
|
||||
|
||||
var (
|
||||
flags = struct {
|
||||
printVersion bool
|
||||
ignoreFailure bool
|
||||
sources struct {
|
||||
file string
|
||||
configDrive string
|
||||
waagent string
|
||||
metadataService bool
|
||||
ec2MetadataService string
|
||||
cloudSigmaMetadataService bool
|
||||
digitalOceanMetadataService string
|
||||
url string
|
||||
procCmdLine bool
|
||||
}
|
||||
convertNetconf string
|
||||
workspace string
|
||||
sshKeyName string
|
||||
oem string
|
||||
validate bool
|
||||
}{}
|
||||
)
|
||||
|
||||
func init() {
|
||||
flag.BoolVar(&flags.printVersion, "version", false, "Print the version and exit")
|
||||
flag.BoolVar(&flags.ignoreFailure, "ignore-failure", false, "Exits with 0 status in the event of malformed input from user-data")
|
||||
flag.StringVar(&flags.sources.file, "from-file", "", "Read user-data from provided file")
|
||||
flag.StringVar(&flags.sources.configDrive, "from-configdrive", "", "Read data from provided cloud-drive directory")
|
||||
flag.StringVar(&flags.sources.waagent, "from-waagent", "", "Read data from provided waagent directory")
|
||||
flag.BoolVar(&flags.sources.metadataService, "from-metadata-service", false, "[DEPRECATED - Use -from-ec2-metadata] Download data from metadata service")
|
||||
flag.StringVar(&flags.sources.ec2MetadataService, "from-ec2-metadata", "", "Download EC2 data from the provided url")
|
||||
flag.BoolVar(&flags.sources.cloudSigmaMetadataService, "from-cloudsigma-metadata", false, "Download data from CloudSigma server context")
|
||||
flag.StringVar(&flags.sources.digitalOceanMetadataService, "from-digitalocean-metadata", "", "Download DigitalOcean data from the provided url")
|
||||
flag.StringVar(&flags.sources.url, "from-url", "", "Download user-data from provided url")
|
||||
flag.BoolVar(&flags.sources.procCmdLine, "from-proc-cmdline", false, fmt.Sprintf("Parse %s for '%s=<url>', using the cloud-config served by an HTTP GET to <url>", proc_cmdline.ProcCmdlineLocation, proc_cmdline.ProcCmdlineCloudConfigFlag))
|
||||
flag.StringVar(&flags.oem, "oem", "", "Use the settings specific to the provided OEM")
|
||||
flag.StringVar(&flags.convertNetconf, "convert-netconf", "", "Read the network config provided in cloud-drive and translate it from the specified format into networkd unit files")
|
||||
flag.StringVar(&flags.workspace, "workspace", "/var/lib/coreos-cloudinit", "Base directory coreos-cloudinit should use to store data")
|
||||
flag.StringVar(&flags.sshKeyName, "ssh-key-name", initialize.DefaultSSHKeyName, "Add SSH keys to the system with the given name")
|
||||
flag.BoolVar(&flags.validate, "validate", false, "[EXPERIMENTAL] Validate the user-data but do not apply it to the system")
|
||||
}
|
||||
|
||||
type oemConfig map[string]string
|
||||
|
||||
var (
|
||||
oemConfigs = map[string]oemConfig{
|
||||
"digitalocean": oemConfig{
|
||||
"from-digitalocean-metadata": "http://169.254.169.254/",
|
||||
"convert-netconf": "digitalocean",
|
||||
},
|
||||
"ec2-compat": oemConfig{
|
||||
"from-ec2-metadata": "http://169.254.169.254/",
|
||||
"from-configdrive": "/media/configdrive",
|
||||
},
|
||||
"rackspace-onmetal": oemConfig{
|
||||
"from-configdrive": "/media/configdrive",
|
||||
"convert-netconf": "debian",
|
||||
},
|
||||
"azure": oemConfig{
|
||||
"from-waagent": "/var/lib/waagent",
|
||||
},
|
||||
"cloudsigma": oemConfig{
|
||||
"from-cloudsigma-metadata": "true",
|
||||
},
|
||||
}
|
||||
)
|
||||
const version = "0.1.2+git"
|
||||
|
||||
func main() {
|
||||
failure := false
|
||||
var userdata []byte
|
||||
var err error
|
||||
|
||||
var printVersion bool
|
||||
flag.BoolVar(&printVersion, "version", false, "Print the version and exit")
|
||||
|
||||
var file string
|
||||
flag.StringVar(&file, "from-file", "", "Read user-data from provided file")
|
||||
|
||||
var url string
|
||||
flag.StringVar(&url, "from-url", "", "Download user-data from provided url")
|
||||
|
||||
var workspace string
|
||||
flag.StringVar(&workspace, "workspace", "/var/lib/coreos-cloudinit", "Base directory coreos-cloudinit should use to store data")
|
||||
|
||||
var sshKeyName string
|
||||
flag.StringVar(&sshKeyName, "ssh-key-name", cloudinit.DefaultSSHKeyName, "Add SSH keys to the system with the given name")
|
||||
|
||||
flag.Parse()
|
||||
|
||||
if c, ok := oemConfigs[flags.oem]; ok {
|
||||
for k, v := range c {
|
||||
flag.Set(k, v)
|
||||
}
|
||||
} else if flags.oem != "" {
|
||||
oems := make([]string, 0, len(oemConfigs))
|
||||
for k := range oemConfigs {
|
||||
oems = append(oems, k)
|
||||
}
|
||||
fmt.Printf("Invalid option to --oem: %q. Supported options: %q\n", flags.oem, oems)
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
if flags.printVersion == true {
|
||||
if printVersion == true {
|
||||
fmt.Printf("coreos-cloudinit version %s\n", version)
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
switch flags.convertNetconf {
|
||||
case "":
|
||||
case "debian":
|
||||
case "digitalocean":
|
||||
default:
|
||||
fmt.Printf("Invalid option to -convert-netconf: '%s'. Supported options: 'debian, digitalocean'\n", flags.convertNetconf)
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
dss := getDatasources()
|
||||
if len(dss) == 0 {
|
||||
fmt.Println("Provide at least one of --from-file, --from-configdrive, --from-ec2-metadata, --from-cloudsigma-metadata, --from-url or --from-proc-cmdline")
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
ds := selectDatasource(dss)
|
||||
if ds == nil {
|
||||
fmt.Println("No datasources available in time")
|
||||
if file != "" && url != "" {
|
||||
fmt.Println("Provide one of --from-file or --from-url")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
fmt.Printf("Fetching user-data from datasource of type %q\n", ds.Type())
|
||||
userdataBytes, err := ds.FetchUserdata()
|
||||
if err != nil {
|
||||
fmt.Printf("Failed fetching user-data from datasource: %v\nContinuing...\n", err)
|
||||
failure = true
|
||||
}
|
||||
|
||||
if report, err := validate.Validate(userdataBytes); err == nil {
|
||||
ret := 0
|
||||
for _, e := range report.Entries() {
|
||||
fmt.Println(e)
|
||||
ret = 1
|
||||
}
|
||||
if flags.validate {
|
||||
os.Exit(ret)
|
||||
}
|
||||
} else {
|
||||
fmt.Printf("Failed while validating user_data (%q)\n", err)
|
||||
if flags.validate {
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("Fetching meta-data from datasource of type %q\n", ds.Type())
|
||||
metadata, err := ds.FetchMetadata()
|
||||
if err != nil {
|
||||
fmt.Printf("Failed fetching meta-data from datasource: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
// Apply environment to user-data
|
||||
env := initialize.NewEnvironment("/", ds.ConfigRoot(), flags.workspace, flags.sshKeyName, metadata)
|
||||
userdata := env.Apply(string(userdataBytes))
|
||||
|
||||
var ccu *config.CloudConfig
|
||||
var script *config.Script
|
||||
if ud, err := initialize.ParseUserData(userdata); err != nil {
|
||||
fmt.Printf("Failed to parse user-data: %v\nContinuing...\n", err)
|
||||
failure = true
|
||||
} else {
|
||||
switch t := ud.(type) {
|
||||
case *config.CloudConfig:
|
||||
ccu = t
|
||||
case *config.Script:
|
||||
script = t
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println("Merging cloud-config from meta-data and user-data")
|
||||
cc := mergeConfigs(ccu, metadata)
|
||||
|
||||
var ifaces []network.InterfaceGenerator
|
||||
if flags.convertNetconf != "" {
|
||||
var err error
|
||||
switch flags.convertNetconf {
|
||||
case "debian":
|
||||
ifaces, err = network.ProcessDebianNetconf(metadata.NetworkConfig)
|
||||
case "digitalocean":
|
||||
ifaces, err = network.ProcessDigitalOceanNetconf(metadata.NetworkConfig)
|
||||
default:
|
||||
err = fmt.Errorf("Unsupported network config format %q", flags.convertNetconf)
|
||||
}
|
||||
if file != "" {
|
||||
log.Printf("Reading user-data from file: %s", file)
|
||||
userdata, err = ioutil.ReadFile(file)
|
||||
if err != nil {
|
||||
fmt.Printf("Failed to generate interfaces: %v\n", err)
|
||||
os.Exit(1)
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
if err = initialize.Apply(cc, ifaces, env); err != nil {
|
||||
fmt.Printf("Failed to apply cloud-config: %v\n", err)
|
||||
} else if url != "" {
|
||||
log.Printf("Reading user-data from metadata service")
|
||||
svc := cloudinit.NewMetadataService(url)
|
||||
userdata, err = svc.UserData()
|
||||
if err != nil {
|
||||
log.Fatal(err.Error())
|
||||
}
|
||||
} else {
|
||||
fmt.Println("Provide one of --from-file or --from-url")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
if script != nil {
|
||||
if err = runScript(*script, env); err != nil {
|
||||
fmt.Printf("Failed to run script: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
if len(userdata) == 0 {
|
||||
log.Printf("No user data to handle, exiting.")
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
if failure && !flags.ignoreFailure {
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
// mergeConfigs merges certain options from md (meta-data from the datasource)
|
||||
// onto cc (a CloudConfig derived from user-data), if they are not already set
|
||||
// on cc (i.e. user-data always takes precedence)
|
||||
func mergeConfigs(cc *config.CloudConfig, md datasource.Metadata) (out config.CloudConfig) {
|
||||
if cc != nil {
|
||||
out = *cc
|
||||
}
|
||||
|
||||
if md.Hostname != "" {
|
||||
if out.Hostname != "" {
|
||||
fmt.Printf("Warning: user-data hostname (%s) overrides metadata hostname (%s)\n", out.Hostname, md.Hostname)
|
||||
} else {
|
||||
out.Hostname = md.Hostname
|
||||
}
|
||||
}
|
||||
for _, key := range md.SSHPublicKeys {
|
||||
out.SSHAuthorizedKeys = append(out.SSHAuthorizedKeys, key)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// getDatasources creates a slice of possible Datasources for cloudinit based
|
||||
// on the different source command-line flags.
|
||||
func getDatasources() []datasource.Datasource {
|
||||
dss := make([]datasource.Datasource, 0, 5)
|
||||
if flags.sources.file != "" {
|
||||
dss = append(dss, file.NewDatasource(flags.sources.file))
|
||||
}
|
||||
if flags.sources.url != "" {
|
||||
dss = append(dss, url.NewDatasource(flags.sources.url))
|
||||
}
|
||||
if flags.sources.configDrive != "" {
|
||||
dss = append(dss, configdrive.NewDatasource(flags.sources.configDrive))
|
||||
}
|
||||
if flags.sources.metadataService {
|
||||
dss = append(dss, ec2.NewDatasource(ec2.DefaultAddress))
|
||||
}
|
||||
if flags.sources.ec2MetadataService != "" {
|
||||
dss = append(dss, ec2.NewDatasource(flags.sources.ec2MetadataService))
|
||||
}
|
||||
if flags.sources.cloudSigmaMetadataService {
|
||||
dss = append(dss, cloudsigma.NewServerContextService())
|
||||
}
|
||||
if flags.sources.digitalOceanMetadataService != "" {
|
||||
dss = append(dss, digitalocean.NewDatasource(flags.sources.digitalOceanMetadataService))
|
||||
}
|
||||
if flags.sources.waagent != "" {
|
||||
dss = append(dss, waagent.NewDatasource(flags.sources.waagent))
|
||||
}
|
||||
if flags.sources.procCmdLine {
|
||||
dss = append(dss, proc_cmdline.NewDatasource())
|
||||
}
|
||||
return dss
|
||||
}
|
||||
|
||||
// selectDatasource attempts to choose a valid Datasource to use based on its
|
||||
// current availability. The first Datasource to report to be available is
|
||||
// returned. Datasources will be retried if possible if they are not
|
||||
// immediately available. If all Datasources are permanently unavailable or
|
||||
// datasourceTimeout is reached before one becomes available, nil is returned.
|
||||
func selectDatasource(sources []datasource.Datasource) datasource.Datasource {
|
||||
ds := make(chan datasource.Datasource)
|
||||
stop := make(chan struct{})
|
||||
var wg sync.WaitGroup
|
||||
|
||||
for _, s := range sources {
|
||||
wg.Add(1)
|
||||
go func(s datasource.Datasource) {
|
||||
defer wg.Done()
|
||||
|
||||
duration := datasourceInterval
|
||||
for {
|
||||
fmt.Printf("Checking availability of %q\n", s.Type())
|
||||
if s.IsAvailable() {
|
||||
ds <- s
|
||||
return
|
||||
} else if !s.AvailabilityChanges() {
|
||||
return
|
||||
}
|
||||
select {
|
||||
case <-stop:
|
||||
return
|
||||
case <-time.After(duration):
|
||||
duration = pkg.ExpBackoff(duration, datasourceMaxInterval)
|
||||
}
|
||||
}
|
||||
}(s)
|
||||
}
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
wg.Wait()
|
||||
close(done)
|
||||
}()
|
||||
|
||||
var s datasource.Datasource
|
||||
select {
|
||||
case s = <-ds:
|
||||
case <-done:
|
||||
case <-time.After(datasourceTimeout):
|
||||
}
|
||||
|
||||
close(stop)
|
||||
return s
|
||||
}
|
||||
|
||||
// TODO(jonboulle): this should probably be refactored and moved into a different module
|
||||
func runScript(script config.Script, env *initialize.Environment) error {
|
||||
err := initialize.PrepWorkspace(env.Workspace())
|
||||
parsed, err := cloudinit.ParseUserData(userdata)
|
||||
if err != nil {
|
||||
fmt.Printf("Failed preparing workspace: %v\n", err)
|
||||
return err
|
||||
log.Fatalf("Failed parsing user-data: %v", err)
|
||||
}
|
||||
path, err := initialize.PersistScriptInWorkspace(script, env.Workspace())
|
||||
if err == nil {
|
||||
var name string
|
||||
name, err = system.ExecuteScript(path)
|
||||
initialize.PersistUnitNameInWorkspace(name, env.Workspace())
|
||||
|
||||
err = cloudinit.PrepWorkspace(workspace)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed preparing workspace: %v", err)
|
||||
}
|
||||
|
||||
switch t := parsed.(type) {
|
||||
case cloudinit.CloudConfig:
|
||||
err = cloudinit.ApplyCloudConfig(t, sshKeyName)
|
||||
case cloudinit.Script:
|
||||
var path string
|
||||
path, err = cloudinit.PersistScriptInWorkspace(t, workspace)
|
||||
if err == nil {
|
||||
var name string
|
||||
name, err = cloudinit.ExecuteScript(path)
|
||||
cloudinit.PersistScriptUnitNameInWorkspace(name, workspace)
|
||||
}
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
log.Fatalf("Failed resolving user-data: %v", err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
@@ -1,89 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/config"
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
)
|
||||
|
||||
func TestMergeConfigs(t *testing.T) {
|
||||
tests := []struct {
|
||||
cc *config.CloudConfig
|
||||
md datasource.Metadata
|
||||
|
||||
out config.CloudConfig
|
||||
}{
|
||||
{
|
||||
// If md is empty and cc is nil, result should be empty
|
||||
out: config.CloudConfig{},
|
||||
},
|
||||
{
|
||||
// If md and cc are empty, result should be empty
|
||||
cc: &config.CloudConfig{},
|
||||
out: config.CloudConfig{},
|
||||
},
|
||||
{
|
||||
// If cc is empty, cc should be returned unchanged
|
||||
cc: &config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
},
|
||||
{
|
||||
// If cc is empty, cc should be returned unchanged(overridden)
|
||||
cc: &config.CloudConfig{},
|
||||
md: datasource.Metadata{Hostname: "md-host", SSHPublicKeys: map[string]string{"key": "ghi"}},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"ghi"}, Hostname: "md-host"},
|
||||
},
|
||||
{
|
||||
// If cc is nil, cc should be returned unchanged(overridden)
|
||||
md: datasource.Metadata{Hostname: "md-host", SSHPublicKeys: map[string]string{"key": "ghi"}},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"ghi"}, Hostname: "md-host"},
|
||||
},
|
||||
{
|
||||
// user-data should override completely in the case of conflicts
|
||||
cc: &config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
md: datasource.Metadata{Hostname: "md-host"},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
},
|
||||
{
|
||||
// Mixed merge should succeed
|
||||
cc: &config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def"}, Hostname: "cc-host"},
|
||||
md: datasource.Metadata{Hostname: "md-host", SSHPublicKeys: map[string]string{"key": "ghi"}},
|
||||
out: config.CloudConfig{SSHAuthorizedKeys: []string{"abc", "def", "ghi"}, Hostname: "cc-host"},
|
||||
},
|
||||
{
|
||||
// Completely non-conflicting merge should be fine
|
||||
cc: &config.CloudConfig{Hostname: "cc-host"},
|
||||
md: datasource.Metadata{SSHPublicKeys: map[string]string{"zaphod": "beeblebrox"}},
|
||||
out: config.CloudConfig{Hostname: "cc-host", SSHAuthorizedKeys: []string{"beeblebrox"}},
|
||||
},
|
||||
{
|
||||
// Non-mergeable settings in user-data should not be affected
|
||||
cc: &config.CloudConfig{Hostname: "cc-host", ManageEtcHosts: config.EtcHosts("lolz")},
|
||||
md: datasource.Metadata{Hostname: "md-host"},
|
||||
out: config.CloudConfig{Hostname: "cc-host", ManageEtcHosts: config.EtcHosts("lolz")},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tt := range tests {
|
||||
out := mergeConfigs(tt.cc, tt.md)
|
||||
if !reflect.DeepEqual(tt.out, out) {
|
||||
t.Errorf("bad config (%d): want %#v, got %#v", i, tt.out, out)
|
||||
}
|
||||
}
|
||||
}
|
27
cover
@@ -1,27 +0,0 @@
#!/bin/bash -e
#
# Generate coverage HTML for a package
# e.g. PKG=./initialize ./cover
#

if [ -z "$PKG" ]; then
	echo "cover only works with a single package, sorry"
	exit 255
fi

COVEROUT="coverage"

if ! [ -d "$COVEROUT" ]; then
	mkdir "$COVEROUT"
fi

# strip out slashes and dots
COVERPKG=${PKG//\//}
COVERPKG=${COVERPKG//./}

# generate arg for "go test"
export COVER="-coverprofile ${COVEROUT}/${COVERPKG}.out"

source ./test

go tool cover -html=${COVEROUT}/${COVERPKG}.out
@@ -1,102 +0,0 @@
|
||||
// Copyright 2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package configdrive
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
|
||||
"github.com/coreos/coreos-cloudinit/datasource"
|
||||
)
|
||||
|
||||
const (
|
||||
openstackApiVersion = "latest"
|
||||
)
|
||||
|
||||
type configDrive struct {
|
||||
root string
|
||||
readFile func(filename string) ([]byte, error)
|
||||
}
|
||||
|
||||
func NewDatasource(root string) *configDrive {
|
||||
return &configDrive{root, ioutil.ReadFile}
|
||||
}
|
||||
|
||||
func (cd *configDrive) IsAvailable() bool {
|
||||
_, err := os.Stat(cd.root)
|
||||
return !os.IsNotExist(err)
|
||||
}
|
||||
|
||||
func (cd *configDrive) AvailabilityChanges() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (cd *configDrive) ConfigRoot() string {
|
||||
return cd.openstackRoot()
|
||||
}
|
||||
|
||||
func (cd *configDrive) FetchMetadata() (metadata datasource.Metadata, err error) {
|
||||
var data []byte
|
||||
var m struct {
|
||||
SSHAuthorizedKeyMap map[string]string `json:"public_keys"`
|
||||
Hostname string `json:"hostname"`
|
||||
NetworkConfig struct {
|
||||
ContentPath string `json:"content_path"`
|
||||
} `json:"network_config"`
|
||||
}
|
||||
|
||||
if data, err = cd.tryReadFile(path.Join(cd.openstackVersionRoot(), "meta_data.json")); err != nil || len(data) == 0 {
|
||||
return
|
||||
}
|
||||
if err = json.Unmarshal([]byte(data), &m); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
metadata.SSHPublicKeys = m.SSHAuthorizedKeyMap
|
||||
metadata.Hostname = m.Hostname
|
||||
if m.NetworkConfig.ContentPath != "" {
|
||||
metadata.NetworkConfig, err = cd.tryReadFile(path.Join(cd.openstackRoot(), m.NetworkConfig.ContentPath))
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func (cd *configDrive) FetchUserdata() ([]byte, error) {
|
||||
return cd.tryReadFile(path.Join(cd.openstackVersionRoot(), "user_data"))
|
||||
}
|
||||
|
||||
func (cd *configDrive) Type() string {
|
||||
return "cloud-drive"
|
||||
}
|
||||
|
||||
func (cd *configDrive) openstackRoot() string {
|
||||
return path.Join(cd.root, "openstack")
|
||||
}
|
||||
|
||||
func (cd *configDrive) openstackVersionRoot() string {
|
||||
return path.Join(cd.openstackRoot(), openstackApiVersion)
|
||||
}
|
||||
|
||||
func (cd *configDrive) tryReadFile(filename string) ([]byte, error) {
|
||||
fmt.Printf("Attempting to read from %q\n", filename)
|
||||
data, err := cd.readFile(filename)
|
||||
if os.IsNotExist(err) {
|
||||
err = nil
|
||||
}
|
||||
return data, err
|
||||
}
|
@@ -1,145 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package configdrive

import (
	"reflect"
	"testing"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/datasource/test"
)

func TestFetchMetadata(t *testing.T) {
	for _, tt := range []struct {
		root  string
		files test.MockFilesystem

		metadata datasource.Metadata
	}{
		{
			root:  "/",
			files: test.NewMockFilesystem(test.File{Path: "/openstack/latest/meta_data.json", Contents: ""}),
		},
		{
			root:  "/",
			files: test.NewMockFilesystem(test.File{Path: "/openstack/latest/meta_data.json", Contents: `{"ignore": "me"}`}),
		},
		{
			root:     "/",
			files:    test.NewMockFilesystem(test.File{Path: "/openstack/latest/meta_data.json", Contents: `{"hostname": "host"}`}),
			metadata: datasource.Metadata{Hostname: "host"},
		},
		{
			root: "/media/configdrive",
			files: test.NewMockFilesystem(test.File{Path: "/media/configdrive/openstack/latest/meta_data.json", Contents: `{"hostname": "host", "network_config": {"content_path": "config_file.json"}, "public_keys":{"1": "key1", "2": "key2"}}`},
				test.File{Path: "/media/configdrive/openstack/config_file.json", Contents: "make it work"},
			),
			metadata: datasource.Metadata{
				Hostname:      "host",
				NetworkConfig: []byte("make it work"),
				SSHPublicKeys: map[string]string{
					"1": "key1",
					"2": "key2",
				},
			},
		},
	} {
		cd := configDrive{tt.root, tt.files.ReadFile}
		metadata, err := cd.FetchMetadata()
		if err != nil {
			t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
		}
		if !reflect.DeepEqual(tt.metadata, metadata) {
			t.Fatalf("bad metadata for %+v: want %#v, got %#v", tt, tt.metadata, metadata)
		}
	}
}

func TestFetchUserdata(t *testing.T) {
	for _, tt := range []struct {
		root  string
		files test.MockFilesystem

		userdata string
	}{
		{
			"/",
			test.NewMockFilesystem(),
			"",
		},
		{
			"/",
			test.NewMockFilesystem(test.File{Path: "/openstack/latest/user_data", Contents: "userdata"}),
			"userdata",
		},
		{
			"/media/configdrive",
			test.NewMockFilesystem(test.File{Path: "/media/configdrive/openstack/latest/user_data", Contents: "userdata"}),
			"userdata",
		},
	} {
		cd := configDrive{tt.root, tt.files.ReadFile}
		userdata, err := cd.FetchUserdata()
		if err != nil {
			t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
		}
		if string(userdata) != tt.userdata {
			t.Fatalf("bad userdata for %+v: want %q, got %q", tt, tt.userdata, userdata)
		}
	}
}

func TestConfigRoot(t *testing.T) {
	for _, tt := range []struct {
		root       string
		configRoot string
	}{
		{
			"/",
			"/openstack",
		},
		{
			"/media/configdrive",
			"/media/configdrive/openstack",
		},
	} {
		cd := configDrive{tt.root, nil}
		if configRoot := cd.ConfigRoot(); configRoot != tt.configRoot {
			t.Fatalf("bad config root for %q: want %q, got %q", tt, tt.configRoot, configRoot)
		}
	}
}

func TestNewDatasource(t *testing.T) {
	for _, tt := range []struct {
		root       string
		expectRoot string
	}{
		{
			root:       "",
			expectRoot: "",
		},
		{
			root:       "/media/configdrive",
			expectRoot: "/media/configdrive",
		},
	} {
		service := NewDatasource(tt.root)
		if service.root != tt.expectRoot {
			t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.root)
		}
	}
}
@@ -1,38 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package datasource

import (
	"net"
)

type Datasource interface {
	IsAvailable() bool
	AvailabilityChanges() bool
	ConfigRoot() string
	FetchMetadata() (Metadata, error)
	FetchUserdata() ([]byte, error)
	Type() string
}

type Metadata struct {
	PublicIPv4    net.IP
	PublicIPv6    net.IP
	PrivateIPv4   net.IP
	PrivateIPv6   net.IP
	Hostname      string
	SSHPublicKeys map[string]string
	NetworkConfig []byte
}
@@ -1,55 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package file

import (
	"io/ioutil"
	"os"

	"github.com/coreos/coreos-cloudinit/datasource"
)

type localFile struct {
	path string
}

func NewDatasource(path string) *localFile {
	return &localFile{path}
}

func (f *localFile) IsAvailable() bool {
	_, err := os.Stat(f.path)
	return !os.IsNotExist(err)
}

func (f *localFile) AvailabilityChanges() bool {
	return true
}

func (f *localFile) ConfigRoot() string {
	return ""
}

func (f *localFile) FetchMetadata() (datasource.Metadata, error) {
	return datasource.Metadata{}, nil
}

func (f *localFile) FetchUserdata() ([]byte, error) {
	return ioutil.ReadFile(f.path)
}

func (f *localFile) Type() string {
	return "local-file"
}
@@ -1,197 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package cloudsigma

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"errors"
	"io/ioutil"
	"net"
	"os"
	"strings"

	"github.com/coreos/coreos-cloudinit/datasource"

	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/cloudsigma/cepgo"
)

const (
	userDataFieldName = "cloudinit-user-data"
)

type serverContextService struct {
	client interface {
		All() (interface{}, error)
		Key(string) (interface{}, error)
		Meta() (map[string]string, error)
		FetchRaw(string) ([]byte, error)
	}
}

func NewServerContextService() *serverContextService {
	return &serverContextService{
		client: cepgo.NewCepgo(),
	}
}

func (_ *serverContextService) IsAvailable() bool {
	productNameFile, err := os.Open("/sys/class/dmi/id/product_name")
	if err != nil {
		return false
	}
	productName := make([]byte, 10)
	_, err = productNameFile.Read(productName)

	return err == nil && string(productName) == "CloudSigma" && hasDHCPLeases()
}

func (_ *serverContextService) AvailabilityChanges() bool {
	return true
}

func (_ *serverContextService) ConfigRoot() string {
	return ""
}

func (_ *serverContextService) Type() string {
	return "server-context"
}

func (scs *serverContextService) FetchMetadata() (metadata datasource.Metadata, err error) {
	var (
		inputMetadata struct {
			Name string            `json:"name"`
			UUID string            `json:"uuid"`
			Meta map[string]string `json:"meta"`
			Nics []struct {
				Mac      string `json:"mac"`
				IPv4Conf struct {
					InterfaceType string `json:"interface_type"`
					IP            struct {
						UUID string `json:"uuid"`
					} `json:"ip"`
				} `json:"ip_v4_conf"`
				VLAN struct {
					UUID string `json:"uuid"`
				} `json:"vlan"`
			} `json:"nics"`
		}
		rawMetadata []byte
	)

	if rawMetadata, err = scs.client.FetchRaw(""); err != nil {
		return
	}

	if err = json.Unmarshal(rawMetadata, &inputMetadata); err != nil {
		return
	}

	if inputMetadata.Name != "" {
		metadata.Hostname = inputMetadata.Name
	} else {
		metadata.Hostname = inputMetadata.UUID
	}

	metadata.SSHPublicKeys = map[string]string{}
	// CloudSigma uses an empty string, rather than no string,
	// to represent the lack of a SSH key
	if key, _ := inputMetadata.Meta["ssh_public_key"]; len(key) > 0 {
		splitted := strings.Split(key, " ")
		metadata.SSHPublicKeys[splitted[len(splitted)-1]] = key
	}

	for _, nic := range inputMetadata.Nics {
		if nic.IPv4Conf.IP.UUID != "" {
			metadata.PublicIPv4 = net.ParseIP(nic.IPv4Conf.IP.UUID)
		}
		if nic.VLAN.UUID != "" {
			if localIP, err := scs.findLocalIP(nic.Mac); err == nil {
				metadata.PrivateIPv4 = localIP
			}
		}
	}

	return
}

func (scs *serverContextService) FetchUserdata() ([]byte, error) {
	metadata, err := scs.client.Meta()
	if err != nil {
		return []byte{}, err
	}

	userData, ok := metadata[userDataFieldName]
	if ok && isBase64Encoded(userDataFieldName, metadata) {
		if decodedUserData, err := base64.StdEncoding.DecodeString(userData); err == nil {
			return decodedUserData, nil
		} else {
			return []byte{}, nil
		}
	}

	return []byte(userData), nil
}

func (scs *serverContextService) findLocalIP(mac string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	ifaceMac, err := net.ParseMAC(mac)
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !bytes.Equal(iface.HardwareAddr, ifaceMac) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}

		for _, addr := range addrs {
			switch ip := addr.(type) {
			case *net.IPNet:
				if ip.IP.To4() != nil {
					return ip.IP.To4(), nil
				}
			}
		}
	}
	return nil, errors.New("Local IP not found")
}

func isBase64Encoded(field string, userdata map[string]string) bool {
	base64Fields, ok := userdata["base64_fields"]
	if !ok {
		return false
	}

	for _, base64Field := range strings.Split(base64Fields, ",") {
		if field == base64Field {
			return true
		}
	}
	return false
}

func hasDHCPLeases() bool {
	files, err := ioutil.ReadDir("/run/systemd/netif/leases/")
	return err == nil && len(files) > 0
}
@@ -1,200 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package cloudsigma

import (
	"net"
	"reflect"
	"testing"
)

type fakeCepgoClient struct {
	raw  []byte
	meta map[string]string
	keys map[string]interface{}
	err  error
}

func (f *fakeCepgoClient) All() (interface{}, error) {
	return f.keys, f.err
}

func (f *fakeCepgoClient) Key(key string) (interface{}, error) {
	return f.keys[key], f.err
}

func (f *fakeCepgoClient) Meta() (map[string]string, error) {
	return f.meta, f.err
}

func (f *fakeCepgoClient) FetchRaw(key string) ([]byte, error) {
	return f.raw, f.err
}

func TestServerContextWithEmptyPublicSSHKey(t *testing.T) {
	client := new(fakeCepgoClient)
	scs := NewServerContextService()
	scs.client = client
	client.raw = []byte(`{
		"meta": {
			"base64_fields": "cloudinit-user-data",
			"cloudinit-user-data": "I2Nsb3VkLWNvbmZpZwoKaG9zdG5hbWU6IGNvcmVvczE=",
			"ssh_public_key": ""
		}
	}`)
	metadata, err := scs.FetchMetadata()
	if err != nil {
		t.Error(err.Error())
	}

	if len(metadata.SSHPublicKeys) != 0 {
		t.Error("There should be no Public SSH Keys provided")
	}
}

func TestServerContextFetchMetadata(t *testing.T) {
	client := new(fakeCepgoClient)
	scs := NewServerContextService()
	scs.client = client
	client.raw = []byte(`{
		"context": true,
		"cpu": 4000,
		"cpu_model": null,
		"cpus_instead_of_cores": false,
		"enable_numa": false,
		"grantees": [],
		"hv_relaxed": false,
		"hv_tsc": false,
		"jobs": [],
		"mem": 4294967296,
		"meta": {
			"base64_fields": "cloudinit-user-data",
			"cloudinit-user-data": "I2Nsb3VkLWNvbmZpZwoKaG9zdG5hbWU6IGNvcmVvczE=",
			"ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2E.../hQ5D5 john@doe"
		},
		"name": "coreos",
		"nics": [
			{
				"boot_order": null,
				"ip_v4_conf": {
					"conf": "dhcp",
					"ip": {
						"gateway": "31.171.244.1",
						"meta": {},
						"nameservers": [
							"178.22.66.167",
							"178.22.71.56",
							"8.8.8.8"
						],
						"netmask": 22,
						"tags": [],
						"uuid": "31.171.251.74"
					}
				},
				"ip_v6_conf": null,
				"mac": "22:3d:09:6b:90:f3",
				"model": "virtio",
				"vlan": null
			},
			{
				"boot_order": null,
				"ip_v4_conf": null,
				"ip_v6_conf": null,
				"mac": "22:ae:4a:fb:8f:31",
				"model": "virtio",
				"vlan": {
					"meta": {
						"description": "",
						"name": "CoreOS"
					},
					"tags": [],
					"uuid": "5dec030e-25b8-4621-a5a4-a3302c9d9619"
				}
			}
		],
		"smp": 2,
		"status": "running",
		"uuid": "20a0059b-041e-4d0c-bcc6-9b2852de48b3"
	}`)

	metadata, err := scs.FetchMetadata()
	if err != nil {
		t.Error(err.Error())
	}

	if metadata.Hostname != "coreos" {
		t.Errorf("Hostname is not 'coreos' but %s instead", metadata.Hostname)
	}

	if metadata.SSHPublicKeys["john@doe"] != "ssh-rsa AAAAB3NzaC1yc2E.../hQ5D5 john@doe" {
		t.Error("Public SSH Keys are not being read properly")
	}

	if !metadata.PublicIPv4.Equal(net.ParseIP("31.171.251.74")) {
		t.Errorf("Public IP is not 31.171.251.74 but %s instead", metadata.PublicIPv4)
	}
}

func TestServerContextFetchUserdata(t *testing.T) {
	client := new(fakeCepgoClient)
	scs := NewServerContextService()
	scs.client = client
	userdataSets := []struct {
		in  map[string]string
		err bool
		out []byte
	}{
		{map[string]string{
			"base64_fields":       "cloudinit-user-data",
			"cloudinit-user-data": "aG9zdG5hbWU6IGNvcmVvc190ZXN0",
		}, false, []byte("hostname: coreos_test")},
		{map[string]string{
			"cloudinit-user-data": "#cloud-config\\nhostname: coreos1",
		}, false, []byte("#cloud-config\\nhostname: coreos1")},
		{map[string]string{}, false, []byte{}},
	}

	for i, set := range userdataSets {
		client.meta = set.in
		got, err := scs.FetchUserdata()
		if (err != nil) != set.err {
			t.Errorf("case %d: bad error state (got %t, want %t)", i, err != nil, set.err)
		}

		if !reflect.DeepEqual(got, set.out) {
			t.Errorf("case %d: got %s, want %s", i, got, set.out)
		}
	}
}

func TestServerContextDecodingBase64UserData(t *testing.T) {
	base64Sets := []struct {
		in  string
		out bool
	}{
		{"cloudinit-user-data,foo,bar", true},
		{"bar,cloudinit-user-data,foo,bar", true},
		{"cloudinit-user-data", true},
		{"", false},
		{"foo", false},
	}

	for _, set := range base64Sets {
		userdata := map[string]string{"base64_fields": set.in}
		if isBase64Encoded("cloudinit-user-data", userdata) != set.out {
			t.Errorf("isBase64Encoded(cloudinit-user-data, %s) should be %t", userdata, set.out)
		}
	}
}
@@ -1,110 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package digitalocean

import (
	"encoding/json"
	"net"
	"strconv"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/datasource/metadata"
)

const (
	DefaultAddress = "http://169.254.169.254/"
	apiVersion     = "metadata/v1"
	userdataUrl    = apiVersion + "/user-data"
	metadataPath   = apiVersion + ".json"
)

type Address struct {
	IPAddress string `json:"ip_address"`
	Netmask   string `json:"netmask"`
	Cidr      int    `json:"cidr"`
	Gateway   string `json:"gateway"`
}

type Interface struct {
	IPv4 *Address `json:"ipv4"`
	IPv6 *Address `json:"ipv6"`
	MAC  string   `json:"mac"`
	Type string   `json:"type"`
}

type Interfaces struct {
	Public  []Interface `json:"public"`
	Private []Interface `json:"private"`
}

type DNS struct {
	Nameservers []string `json:"nameservers"`
}

type Metadata struct {
	Hostname   string     `json:"hostname"`
	Interfaces Interfaces `json:"interfaces"`
	PublicKeys []string   `json:"public_keys"`
	DNS        DNS        `json:"dns"`
}

type metadataService struct {
	metadata.MetadataService
}

func NewDatasource(root string) *metadataService {
	return &metadataService{MetadataService: metadata.NewDatasource(root, apiVersion, userdataUrl, metadataPath)}
}

func (ms *metadataService) FetchMetadata() (metadata datasource.Metadata, err error) {
	var data []byte
	var m Metadata

	if data, err = ms.FetchData(ms.MetadataUrl()); err != nil || len(data) == 0 {
		return
	}
	if err = json.Unmarshal(data, &m); err != nil {
		return
	}

	if len(m.Interfaces.Public) > 0 {
		if m.Interfaces.Public[0].IPv4 != nil {
			metadata.PublicIPv4 = net.ParseIP(m.Interfaces.Public[0].IPv4.IPAddress)
		}
		if m.Interfaces.Public[0].IPv6 != nil {
			metadata.PublicIPv6 = net.ParseIP(m.Interfaces.Public[0].IPv6.IPAddress)
		}
	}
	if len(m.Interfaces.Private) > 0 {
		if m.Interfaces.Private[0].IPv4 != nil {
			metadata.PrivateIPv4 = net.ParseIP(m.Interfaces.Private[0].IPv4.IPAddress)
		}
		if m.Interfaces.Private[0].IPv6 != nil {
			metadata.PrivateIPv6 = net.ParseIP(m.Interfaces.Private[0].IPv6.IPAddress)
		}
	}
	metadata.Hostname = m.Hostname
	metadata.SSHPublicKeys = map[string]string{}
	for i, key := range m.PublicKeys {
		metadata.SSHPublicKeys[strconv.Itoa(i)] = key
	}
	metadata.NetworkConfig = data

	return
}

func (ms metadataService) Type() string {
	return "digitalocean-metadata-service"
}
@@ -1,150 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package digitalocean

import (
	"fmt"
	"net"
	"reflect"
	"testing"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/datasource/metadata"
	"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
	"github.com/coreos/coreos-cloudinit/pkg"
)

func TestType(t *testing.T) {
	want := "digitalocean-metadata-service"
	if kind := (metadataService{}).Type(); kind != want {
		t.Fatalf("bad type: want %q, got %q", want, kind)
	}
}

func TestFetchMetadata(t *testing.T) {
	for _, tt := range []struct {
		root         string
		metadataPath string
		resources    map[string]string
		expect       datasource.Metadata
		clientErr    error
		expectErr    error
	}{
		{
			root:         "/",
			metadataPath: "v1.json",
			resources: map[string]string{
				"/v1.json": "bad",
			},
			expectErr: fmt.Errorf("invalid character 'b' looking for beginning of value"),
		},
		{
			root:         "/",
			metadataPath: "v1.json",
			resources: map[string]string{
				"/v1.json": `{
					"droplet_id": 1,
					"user_data": "hello",
					"vendor_data": "hello",
					"public_keys": [
						"publickey1",
						"publickey2"
					],
					"region": "nyc2",
					"interfaces": {
						"public": [
							{
								"ipv4": {
									"ip_address": "192.168.1.2",
									"netmask": "255.255.255.0",
									"gateway": "192.168.1.1"
								},
								"ipv6": {
									"ip_address": "fe00::",
									"cidr": 126,
									"gateway": "fe00::"
								},
								"mac": "ab:cd:ef:gh:ij",
								"type": "public"
							}
						]
					}
				}`,
			},
			expect: datasource.Metadata{
				PublicIPv4: net.ParseIP("192.168.1.2"),
				PublicIPv6: net.ParseIP("fe00::"),
				SSHPublicKeys: map[string]string{
					"0": "publickey1",
					"1": "publickey2",
				},
				NetworkConfig: []byte(`{
					"droplet_id": 1,
					"user_data": "hello",
					"vendor_data": "hello",
					"public_keys": [
						"publickey1",
						"publickey2"
					],
					"region": "nyc2",
					"interfaces": {
						"public": [
							{
								"ipv4": {
									"ip_address": "192.168.1.2",
									"netmask": "255.255.255.0",
									"gateway": "192.168.1.1"
								},
								"ipv6": {
									"ip_address": "fe00::",
									"cidr": 126,
									"gateway": "fe00::"
								},
								"mac": "ab:cd:ef:gh:ij",
								"type": "public"
							}
						]
					}
				}`),
			},
		},
		{
			clientErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
			expectErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
		},
	} {
		service := &metadataService{
			MetadataService: metadata.MetadataService{
				Root:         tt.root,
				Client:       &test.HttpClient{Resources: tt.resources, Err: tt.clientErr},
				MetadataPath: tt.metadataPath,
			},
		}
		metadata, err := service.FetchMetadata()
		if Error(err) != Error(tt.expectErr) {
			t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
		}
		if !reflect.DeepEqual(tt.expect, metadata) {
			t.Fatalf("bad fetch (%q): want %#q, got %#q", tt.resources, tt.expect, metadata)
		}
	}
}

func Error(err error) string {
	if err != nil {
		return err.Error()
	}
	return ""
}
@@ -1,114 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ec2

import (
	"bufio"
	"bytes"
	"fmt"
	"net"
	"strings"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/datasource/metadata"
	"github.com/coreos/coreos-cloudinit/pkg"
)

const (
	DefaultAddress = "http://169.254.169.254/"
	apiVersion     = "2009-04-04/"
	userdataPath   = apiVersion + "user-data"
	metadataPath   = apiVersion + "meta-data"
)

type metadataService struct {
	metadata.MetadataService
}

func NewDatasource(root string) *metadataService {
	return &metadataService{metadata.NewDatasource(root, apiVersion, userdataPath, metadataPath)}
}

func (ms metadataService) FetchMetadata() (datasource.Metadata, error) {
	metadata := datasource.Metadata{}

	if keynames, err := ms.fetchAttributes(fmt.Sprintf("%s/public-keys", ms.MetadataUrl())); err == nil {
		keyIDs := make(map[string]string)
		for _, keyname := range keynames {
			tokens := strings.SplitN(keyname, "=", 2)
			if len(tokens) != 2 {
				return metadata, fmt.Errorf("malformed public key: %q", keyname)
			}
			keyIDs[tokens[1]] = tokens[0]
		}

		metadata.SSHPublicKeys = map[string]string{}
		for name, id := range keyIDs {
			sshkey, err := ms.fetchAttribute(fmt.Sprintf("%s/public-keys/%s/openssh-key", ms.MetadataUrl(), id))
			if err != nil {
				return metadata, err
			}
			metadata.SSHPublicKeys[name] = sshkey
			fmt.Printf("Found SSH key for %q\n", name)
		}
	} else if _, ok := err.(pkg.ErrNotFound); !ok {
		return metadata, err
	}

	if hostname, err := ms.fetchAttribute(fmt.Sprintf("%s/hostname", ms.MetadataUrl())); err == nil {
		metadata.Hostname = strings.Split(hostname, " ")[0]
	} else if _, ok := err.(pkg.ErrNotFound); !ok {
		return metadata, err
	}

	if localAddr, err := ms.fetchAttribute(fmt.Sprintf("%s/local-ipv4", ms.MetadataUrl())); err == nil {
		metadata.PrivateIPv4 = net.ParseIP(localAddr)
	} else if _, ok := err.(pkg.ErrNotFound); !ok {
		return metadata, err
	}

	if publicAddr, err := ms.fetchAttribute(fmt.Sprintf("%s/public-ipv4", ms.MetadataUrl())); err == nil {
		metadata.PublicIPv4 = net.ParseIP(publicAddr)
	} else if _, ok := err.(pkg.ErrNotFound); !ok {
		return metadata, err
	}

	return metadata, nil
}

func (ms metadataService) Type() string {
	return "ec2-metadata-service"
}

func (ms metadataService) fetchAttributes(url string) ([]string, error) {
	resp, err := ms.FetchData(url)
	if err != nil {
		return nil, err
	}
	scanner := bufio.NewScanner(bytes.NewBuffer(resp))
	data := make([]string, 0)
	for scanner.Scan() {
		data = append(data, scanner.Text())
	}
	return data, scanner.Err()
}

func (ms metadataService) fetchAttribute(url string) (string, error) {
	if attrs, err := ms.fetchAttributes(url); err == nil && len(attrs) > 0 {
		return attrs[0], nil
	} else {
		return "", err
	}
}
@@ -1,222 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ec2

import (
	"fmt"
	"net"
	"reflect"
	"testing"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/datasource/metadata"
	"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
	"github.com/coreos/coreos-cloudinit/pkg"
)

func TestType(t *testing.T) {
	want := "ec2-metadata-service"
	if kind := (metadataService{}).Type(); kind != want {
		t.Fatalf("bad type: want %q, got %q", want, kind)
	}
}

func TestFetchAttributes(t *testing.T) {
	for _, s := range []struct {
		resources map[string]string
		err       error
		tests     []struct {
			path string
			val  []string
		}
	}{
		{
			resources: map[string]string{
				"/":      "a\nb\nc/",
				"/c/":    "d\ne/",
				"/c/e/":  "f",
				"/a":     "1",
				"/b":     "2",
				"/c/d":   "3",
				"/c/e/f": "4",
			},
			tests: []struct {
				path string
				val  []string
			}{
				{"/", []string{"a", "b", "c/"}},
				{"/b", []string{"2"}},
				{"/c/d", []string{"3"}},
				{"/c/e/", []string{"f"}},
			},
		},
		{
			err: fmt.Errorf("test error"),
			tests: []struct {
				path string
				val  []string
			}{
				{"", nil},
			},
		},
	} {
		service := metadataService{metadata.MetadataService{
			Client: &test.HttpClient{Resources: s.resources, Err: s.err},
		}}
		for _, tt := range s.tests {
			attrs, err := service.fetchAttributes(tt.path)
			if err != s.err {
				t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.resources, s.err, err)
			}
			if !reflect.DeepEqual(attrs, tt.val) {
				t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.resources, tt.val, attrs)
			}
		}
	}
}

func TestFetchAttribute(t *testing.T) {
	for _, s := range []struct {
		resources map[string]string
		err       error
		tests     []struct {
			path string
			val  string
		}
	}{
		{
			resources: map[string]string{
				"/":      "a\nb\nc/",
				"/c/":    "d\ne/",
				"/c/e/":  "f",
				"/a":     "1",
				"/b":     "2",
				"/c/d":   "3",
				"/c/e/f": "4",
			},
			tests: []struct {
				path string
				val  string
			}{
				{"/a", "1"},
				{"/b", "2"},
				{"/c/d", "3"},
				{"/c/e/f", "4"},
			},
		},
		{
			err: fmt.Errorf("test error"),
			tests: []struct {
				path string
				val  string
			}{
				{"", ""},
			},
		},
	} {
		service := metadataService{metadata.MetadataService{
			Client: &test.HttpClient{Resources: s.resources, Err: s.err},
		}}
		for _, tt := range s.tests {
			attr, err := service.fetchAttribute(tt.path)
			if err != s.err {
				t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.resources, s.err, err)
			}
			if attr != tt.val {
				t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.resources, tt.val, attr)
			}
		}
	}
}

func TestFetchMetadata(t *testing.T) {
	for _, tt := range []struct {
		root         string
		metadataPath string
		resources    map[string]string
		expect       datasource.Metadata
		clientErr    error
		expectErr    error
	}{
		{
			root:         "/",
			metadataPath: "2009-04-04/meta-data",
			resources: map[string]string{
				"/2009-04-04/meta-data/public-keys": "bad\n",
			},
			expectErr: fmt.Errorf("malformed public key: \"bad\""),
		},
		{
			root:         "/",
			metadataPath: "2009-04-04/meta-data",
			resources: map[string]string{
				"/2009-04-04/meta-data/hostname":                  "host",
				"/2009-04-04/meta-data/local-ipv4":                "1.2.3.4",
				"/2009-04-04/meta-data/public-ipv4":               "5.6.7.8",
				"/2009-04-04/meta-data/public-keys":               "0=test1\n",
				"/2009-04-04/meta-data/public-keys/0":             "openssh-key",
				"/2009-04-04/meta-data/public-keys/0/openssh-key": "key",
			},
			expect: datasource.Metadata{
				Hostname:      "host",
				PrivateIPv4:   net.ParseIP("1.2.3.4"),
				PublicIPv4:    net.ParseIP("5.6.7.8"),
				SSHPublicKeys: map[string]string{"test1": "key"},
			},
		},
		{
			root:         "/",
			metadataPath: "2009-04-04/meta-data",
			resources: map[string]string{
				"/2009-04-04/meta-data/hostname":                  "host domain another_domain",
				"/2009-04-04/meta-data/local-ipv4":                "1.2.3.4",
				"/2009-04-04/meta-data/public-ipv4":               "5.6.7.8",
				"/2009-04-04/meta-data/public-keys":               "0=test1\n",
				"/2009-04-04/meta-data/public-keys/0":             "openssh-key",
				"/2009-04-04/meta-data/public-keys/0/openssh-key": "key",
			},
			expect: datasource.Metadata{
				Hostname:      "host",
				PrivateIPv4:   net.ParseIP("1.2.3.4"),
				PublicIPv4:    net.ParseIP("5.6.7.8"),
				SSHPublicKeys: map[string]string{"test1": "key"},
			},
		},
		{
			clientErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
			expectErr: pkg.ErrTimeout{Err: fmt.Errorf("test error")},
		},
	} {
		service := &metadataService{metadata.MetadataService{
			Root:         tt.root,
			Client:       &test.HttpClient{Resources: tt.resources, Err: tt.clientErr},
			MetadataPath: tt.metadataPath,
		}}
		metadata, err := service.FetchMetadata()
		if Error(err) != Error(tt.expectErr) {
			t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
		}
		if !reflect.DeepEqual(tt.expect, metadata) {
			t.Fatalf("bad fetch (%q): want %#v, got %#v", tt.resources, tt.expect, metadata)
		}
	}
}

func Error(err error) string {
	if err != nil {
		return err.Error()
	}
	return ""
}
@@ -1,71 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package metadata

import (
	"strings"

	"github.com/coreos/coreos-cloudinit/pkg"
)

type MetadataService struct {
	Root         string
	Client       pkg.Getter
	ApiVersion   string
	UserdataPath string
	MetadataPath string
}

func NewDatasource(root, apiVersion, userdataPath, metadataPath string) MetadataService {
	if !strings.HasSuffix(root, "/") {
		root += "/"
	}
	return MetadataService{root, pkg.NewHttpClient(), apiVersion, userdataPath, metadataPath}
}

func (ms MetadataService) IsAvailable() bool {
	_, err := ms.Client.Get(ms.Root + ms.ApiVersion)
	return (err == nil)
}

func (ms MetadataService) AvailabilityChanges() bool {
	return true
}

func (ms MetadataService) ConfigRoot() string {
	return ms.Root
}

func (ms MetadataService) FetchUserdata() ([]byte, error) {
	return ms.FetchData(ms.UserdataUrl())
}

func (ms MetadataService) FetchData(url string) ([]byte, error) {
	if data, err := ms.Client.GetRetry(url); err == nil {
		return data, err
	} else if _, ok := err.(pkg.ErrNotFound); ok {
		return []byte{}, nil
	} else {
		return data, err
	}
}

func (ms MetadataService) MetadataUrl() string {
	return (ms.Root + ms.MetadataPath)
}

func (ms MetadataService) UserdataUrl() string {
	return (ms.Root + ms.UserdataPath)
}
@@ -1,185 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package metadata

import (
	"bytes"
	"fmt"
	"testing"

	"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
	"github.com/coreos/coreos-cloudinit/pkg"
)

func TestAvailabilityChanges(t *testing.T) {
	want := true
	if ac := (MetadataService{}).AvailabilityChanges(); ac != want {
		t.Fatalf("bad AvailabilityChanges: want %t, got %t", want, ac)
	}
}

func TestIsAvailable(t *testing.T) {
	for _, tt := range []struct {
		root       string
		apiVersion string
		resources  map[string]string
		expect     bool
	}{
		{
			root:       "/",
			apiVersion: "2009-04-04",
			resources: map[string]string{
				"/2009-04-04": "",
			},
			expect: true,
		},
		{
			root:      "/",
			resources: map[string]string{},
			expect:    false,
		},
	} {
		service := &MetadataService{
			Root:       tt.root,
			Client:     &test.HttpClient{Resources: tt.resources, Err: nil},
			ApiVersion: tt.apiVersion,
		}
		if a := service.IsAvailable(); a != tt.expect {
			t.Fatalf("bad isAvailable (%q): want %t, got %t", tt.resources, tt.expect, a)
		}
	}
}

func TestFetchUserdata(t *testing.T) {
	for _, tt := range []struct {
		root         string
		userdataPath string
		resources    map[string]string
		userdata     []byte
		clientErr    error
		expectErr    error
	}{
		{
			root:         "/",
			userdataPath: "2009-04-04/user-data",
			resources: map[string]string{
				"/2009-04-04/user-data": "hello",
			},
			userdata: []byte("hello"),
		},
		{
			root:      "/",
			clientErr: pkg.ErrNotFound{Err: fmt.Errorf("test not found error")},
			userdata:  []byte{},
		},
		{
			root:      "/",
			clientErr: pkg.ErrTimeout{Err: fmt.Errorf("test timeout error")},
			expectErr: pkg.ErrTimeout{Err: fmt.Errorf("test timeout error")},
		},
	} {
		service := &MetadataService{
			Root:         tt.root,
			Client:       &test.HttpClient{Resources: tt.resources, Err: tt.clientErr},
			UserdataPath: tt.userdataPath,
		}
		data, err := service.FetchUserdata()
		if Error(err) != Error(tt.expectErr) {
			t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
		}
		if !bytes.Equal(data, tt.userdata) {
			t.Fatalf("bad userdata (%q): want %q, got %q", tt.resources, tt.userdata, data)
		}
	}
}

func TestUrls(t *testing.T) {
	for _, tt := range []struct {
		root         string
		userdataPath string
		metadataPath string
		expectRoot   string
		userdata     string
		metadata     string
	}{
		{
			root:         "/",
			userdataPath: "2009-04-04/user-data",
			metadataPath: "2009-04-04/meta-data",
			expectRoot:   "/",
			userdata:     "/2009-04-04/user-data",
			metadata:     "/2009-04-04/meta-data",
		},
		{
			root:         "http://169.254.169.254/",
			userdataPath: "2009-04-04/user-data",
			metadataPath: "2009-04-04/meta-data",
			expectRoot:   "http://169.254.169.254/",
			userdata:     "http://169.254.169.254/2009-04-04/user-data",
			metadata:     "http://169.254.169.254/2009-04-04/meta-data",
		},
	} {
		service := &MetadataService{
			Root:         tt.root,
			UserdataPath: tt.userdataPath,
			MetadataPath: tt.metadataPath,
		}
		if url := service.UserdataUrl(); url != tt.userdata {
			t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.userdata, url)
		}
		if url := service.MetadataUrl(); url != tt.metadata {
			t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.metadata, url)
		}
		if url := service.ConfigRoot(); url != tt.expectRoot {
			t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.expectRoot, url)
		}
	}
}

func TestNewDatasource(t *testing.T) {
	for _, tt := range []struct {
		root       string
		expectRoot string
	}{
		{
			root:       "",
			expectRoot: "/",
		},
		{
			root:       "/",
			expectRoot: "/",
		},
		{
			root:       "http://169.254.169.254",
			expectRoot: "http://169.254.169.254/",
		},
		{
			root:       "http://169.254.169.254/",
			expectRoot: "http://169.254.169.254/",
		},
	} {
		service := NewDatasource(tt.root, "", "", "")
		if service.Root != tt.expectRoot {
			t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.Root)
		}
	}
}

func Error(err error) string {
	if err != nil {
		return err.Error()
	}
	return ""
}
@@ -1,41 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package test

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/pkg"
)

type HttpClient struct {
	Resources map[string]string
	Err       error
}

func (t *HttpClient) GetRetry(url string) ([]byte, error) {
	if t.Err != nil {
		return nil, t.Err
	}
	if val, ok := t.Resources[url]; ok {
		return []byte(val), nil
	} else {
		return nil, pkg.ErrNotFound{fmt.Errorf("not found: %q", url)}
	}
}

func (t *HttpClient) Get(url string) ([]byte, error) {
	return t.GetRetry(url)
}
@@ -1,110 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package proc_cmdline

import (
	"errors"
	"io/ioutil"
	"log"
	"strings"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/pkg"
)

const (
	ProcCmdlineLocation        = "/proc/cmdline"
	ProcCmdlineCloudConfigFlag = "cloud-config-url"
)

type procCmdline struct {
	Location string
}

func NewDatasource() *procCmdline {
	return &procCmdline{Location: ProcCmdlineLocation}
}

func (c *procCmdline) IsAvailable() bool {
	contents, err := ioutil.ReadFile(c.Location)
	if err != nil {
		return false
	}

	cmdline := strings.TrimSpace(string(contents))
	_, err = findCloudConfigURL(cmdline)
	return (err == nil)
}

func (c *procCmdline) AvailabilityChanges() bool {
	return false
}

func (c *procCmdline) ConfigRoot() string {
	return ""
}

func (c *procCmdline) FetchMetadata() (datasource.Metadata, error) {
	return datasource.Metadata{}, nil
}

func (c *procCmdline) FetchUserdata() ([]byte, error) {
	contents, err := ioutil.ReadFile(c.Location)
	if err != nil {
		return nil, err
	}

	cmdline := strings.TrimSpace(string(contents))
	url, err := findCloudConfigURL(cmdline)
	if err != nil {
		return nil, err
	}

	client := pkg.NewHttpClient()
	cfg, err := client.GetRetry(url)
	if err != nil {
		return nil, err
	}

	return cfg, nil
}

func (c *procCmdline) Type() string {
	return "proc-cmdline"
}

func findCloudConfigURL(input string) (url string, err error) {
	err = errors.New("cloud-config-url not found")
	for _, token := range strings.Split(input, " ") {
		parts := strings.SplitN(token, "=", 2)

		key := parts[0]
		key = strings.Replace(key, "_", "-", -1)

		if key != "cloud-config-url" {
			continue
		}

		if len(parts) != 2 {
			log.Printf("Found cloud-config-url in /proc/cmdline with no value, ignoring.")
			continue
		}

		url = parts[1]
		err = nil
	}

	return
}
@@ -1,102 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package proc_cmdline

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"os"
	"testing"
)

func TestParseCmdlineCloudConfigFound(t *testing.T) {
	tests := []struct {
		input  string
		expect string
	}{
		{
			"cloud-config-url=example.com",
			"example.com",
		},
		{
			"cloud_config_url=example.com",
			"example.com",
		},
		{
			"cloud-config-url cloud-config-url=example.com",
			"example.com",
		},
		{
			"cloud-config-url= cloud-config-url=example.com",
			"example.com",
		},
		{
			"cloud-config-url=one.example.com cloud-config-url=two.example.com",
			"two.example.com",
		},
		{
			"foo=bar cloud-config-url=example.com ping=pong",
			"example.com",
		},
	}

	for i, tt := range tests {
		output, err := findCloudConfigURL(tt.input)
		if output != tt.expect {
			t.Errorf("Test case %d failed: %s != %s", i, output, tt.expect)
		}
		if err != nil {
			t.Errorf("Test case %d produced error: %v", i, err)
		}
	}
}

func TestProcCmdlineAndFetchConfig(t *testing.T) {

	var (
		ProcCmdlineTmpl    = "foo=bar cloud-config-url=%s/config\n"
		CloudConfigContent = "#cloud-config\n"
	)

	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == "GET" && r.RequestURI == "/config" {
			fmt.Fprint(w, CloudConfigContent)
		}
	}))
	defer ts.Close()

	file, err := ioutil.TempFile(os.TempDir(), "test_proc_cmdline")
	defer os.Remove(file.Name())
	if err != nil {
		t.Errorf("Test produced error: %v", err)
	}
	_, err = file.Write([]byte(fmt.Sprintf(ProcCmdlineTmpl, ts.URL)))
	if err != nil {
		t.Errorf("Test produced error: %v", err)
	}

	p := NewDatasource()
	p.Location = file.Name()
	cfg, err := p.FetchUserdata()
	if err != nil {
		t.Errorf("Test produced error: %v", err)
	}

	if string(cfg) != CloudConfigContent {
		t.Errorf("Test failed, response body: %s != %s", cfg, CloudConfigContent)
	}
}
@@ -1,57 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package test

import (
	"fmt"
	"os"
	"path"
)

type MockFilesystem map[string]File

type File struct {
	Path      string
	Contents  string
	Directory bool
}

func (m MockFilesystem) ReadFile(filename string) ([]byte, error) {
	if f, ok := m[path.Clean(filename)]; ok {
		if f.Directory {
			return nil, fmt.Errorf("read %s: is a directory", filename)
		}
		return []byte(f.Contents), nil
	}
	return nil, os.ErrNotExist
}

func NewMockFilesystem(files ...File) MockFilesystem {
	fs := MockFilesystem{}
	for _, file := range files {
		fs[file.Path] = file

		// Create the directories leading up to the file
		p := path.Dir(file.Path)
		for p != "/" && p != "." {
			if f, ok := fs[p]; ok && !f.Directory {
				panic(fmt.Sprintf("%q already exists and is not a directory (%#v)", p, f))
			}
			fs[p] = File{Path: p, Directory: true}
			p = path.Dir(p)
		}
	}
	return fs
}
@@ -1,115 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package test

import (
	"errors"
	"os"
	"reflect"
	"testing"
)

func TestReadFile(t *testing.T) {
	tests := []struct {
		filesystem MockFilesystem

		filename string
		contents string
		err      error
	}{
		{
			filename: "dne",
			err:      os.ErrNotExist,
		},
		{
			filesystem: MockFilesystem{
				"exists": File{Contents: "hi"},
			},
			filename: "exists",
			contents: "hi",
		},
		{
			filesystem: MockFilesystem{
				"dir": File{Directory: true},
			},
			filename: "dir",
			err:      errors.New("read dir: is a directory"),
		},
	}

	for i, tt := range tests {
		contents, err := tt.filesystem.ReadFile(tt.filename)
		if tt.contents != string(contents) {
			t.Errorf("bad contents (test %d): want %q, got %q", i, tt.contents, string(contents))
		}
		if !reflect.DeepEqual(tt.err, err) {
			t.Errorf("bad error (test %d): want %v, got %v", i, tt.err, err)
		}
	}
}

func TestNewMockFilesystem(t *testing.T) {
	tests := []struct {
		files []File

		filesystem MockFilesystem
	}{
		{
			filesystem: MockFilesystem{},
		},
		{
			files: []File{File{Path: "file"}},
			filesystem: MockFilesystem{
				"file": File{Path: "file"},
			},
		},
		{
			files: []File{File{Path: "/file"}},
			filesystem: MockFilesystem{
				"/file": File{Path: "/file"},
			},
		},
		{
			files: []File{File{Path: "/dir/file"}},
			filesystem: MockFilesystem{
				"/dir":      File{Path: "/dir", Directory: true},
				"/dir/file": File{Path: "/dir/file"},
			},
		},
		{
			files: []File{File{Path: "/dir/dir/file"}},
			filesystem: MockFilesystem{
				"/dir":          File{Path: "/dir", Directory: true},
				"/dir/dir":      File{Path: "/dir/dir", Directory: true},
				"/dir/dir/file": File{Path: "/dir/dir/file"},
			},
		},
		{
			files: []File{File{Path: "/dir/dir/dir", Directory: true}},
			filesystem: MockFilesystem{
				"/dir":         File{Path: "/dir", Directory: true},
				"/dir/dir":     File{Path: "/dir/dir", Directory: true},
				"/dir/dir/dir": File{Path: "/dir/dir/dir", Directory: true},
			},
		},
	}

	for i, tt := range tests {
		filesystem := NewMockFilesystem(tt.files...)
		if !reflect.DeepEqual(tt.filesystem, filesystem) {
			t.Errorf("bad filesystem (test %d): want %#v, got %#v", i, tt.filesystem, filesystem)
		}
	}
}
@@ -1,55 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package url

import (
	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/pkg"
)

type remoteFile struct {
	url string
}

func NewDatasource(url string) *remoteFile {
	return &remoteFile{url}
}

func (f *remoteFile) IsAvailable() bool {
	client := pkg.NewHttpClient()
	_, err := client.Get(f.url)
	return (err == nil)
}

func (f *remoteFile) AvailabilityChanges() bool {
	return true
}

func (f *remoteFile) ConfigRoot() string {
	return ""
}

func (f *remoteFile) FetchMetadata() (datasource.Metadata, error) {
	return datasource.Metadata{}, nil
}

func (f *remoteFile) FetchUserdata() ([]byte, error) {
	client := pkg.NewHttpClient()
	return client.GetRetry(f.url)
}

func (f *remoteFile) Type() string {
	return "url"
}
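
Not part of the diff: a short usage sketch for the url datasource above, using only the constructor and methods shown in this file; the example URL is an assumption for illustration.

package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/datasource/url"
)

func main() {
	ds := url.NewDatasource("http://example.com/user-data")

	// IsAvailable issues a single GET; FetchUserdata retries via GetRetry.
	if !ds.IsAvailable() {
		fmt.Println("user-data endpoint not reachable")
		return
	}
	userdata, err := ds.FetchUserdata()
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	fmt.Printf("fetched %d bytes of user-data from the %s datasource\n", len(userdata), ds.Type())
}
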
@@ -1,117 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package waagent

import (
	"encoding/xml"
	"fmt"
	"io/ioutil"
	"net"
	"os"
	"path"

	"github.com/coreos/coreos-cloudinit/datasource"
)

type waagent struct {
	root     string
	readFile func(filename string) ([]byte, error)
}

func NewDatasource(root string) *waagent {
	return &waagent{root, ioutil.ReadFile}
}

func (a *waagent) IsAvailable() bool {
	_, err := os.Stat(path.Join(a.root, "provisioned"))
	return !os.IsNotExist(err)
}

func (a *waagent) AvailabilityChanges() bool {
	return true
}

func (a *waagent) ConfigRoot() string {
	return a.root
}

func (a *waagent) FetchMetadata() (metadata datasource.Metadata, err error) {
	var metadataBytes []byte
	if metadataBytes, err = a.tryReadFile(path.Join(a.root, "SharedConfig.xml")); err != nil {
		return
	}
	if len(metadataBytes) == 0 {
		return
	}

	type Instance struct {
		Id             string `xml:"id,attr"`
		Address        string `xml:"address,attr"`
		InputEndpoints struct {
			Endpoints []struct {
				LoadBalancedPublicAddress string `xml:"loadBalancedPublicAddress,attr"`
			} `xml:"Endpoint"`
		}
	}

	type SharedConfig struct {
		Incarnation struct {
			Instance string `xml:"instance,attr"`
		}
		Instances struct {
			Instances []Instance `xml:"Instance"`
		}
	}

	var m SharedConfig
	if err = xml.Unmarshal(metadataBytes, &m); err != nil {
		return
	}

	var instance Instance
	for _, i := range m.Instances.Instances {
		if i.Id == m.Incarnation.Instance {
			instance = i
			break
		}
	}

	metadata.PrivateIPv4 = net.ParseIP(instance.Address)
	for _, e := range instance.InputEndpoints.Endpoints {
		host, _, err := net.SplitHostPort(e.LoadBalancedPublicAddress)
		if err == nil {
			metadata.PublicIPv4 = net.ParseIP(host)
			break
		}
	}
	return
}

func (a *waagent) FetchUserdata() ([]byte, error) {
	return a.tryReadFile(path.Join(a.root, "CustomData"))
}

func (a *waagent) Type() string {
	return "waagent"
}

func (a *waagent) tryReadFile(filename string) ([]byte, error) {
	fmt.Printf("Attempting to read from %q\n", filename)
	data, err := a.readFile(filename)
	if os.IsNotExist(err) {
		err = nil
	}
	return data, err
}
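
Not part of the diff: the url and waagent types above expose the same method set, so callers can treat them interchangeably. The sketch below assumes the import paths shown in this change and defines a small local interface covering just the methods it needs; the paths and URL are illustrative values.

package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/datasource/url"
	"github.com/coreos/coreos-cloudinit/datasource/waagent"
)

// available is a locally defined subset of the behaviour shown in this diff.
type available interface {
	IsAvailable() bool
	FetchMetadata() (datasource.Metadata, error)
	Type() string
}

func main() {
	candidates := []available{
		waagent.NewDatasource("/var/lib/waagent"),
		url.NewDatasource("http://example.com/metadata"),
	}
	for _, ds := range candidates {
		if !ds.IsAvailable() {
			continue
		}
		md, err := ds.FetchMetadata()
		if err != nil {
			fmt.Printf("%s: %v\n", ds.Type(), err)
			continue
		}
		fmt.Printf("%s: private=%v public=%v\n", ds.Type(), md.PrivateIPv4, md.PublicIPv4)
		return
	}
	fmt.Println("no datasource available")
}
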
@@ -1,166 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package waagent

import (
	"net"
	"reflect"
	"testing"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/datasource/test"
)

func TestFetchMetadata(t *testing.T) {
	for _, tt := range []struct {
		root     string
		files    test.MockFilesystem
		metadata datasource.Metadata
	}{
		{
			root:  "/",
			files: test.NewMockFilesystem(),
		},
		{
			root:  "/",
			files: test.NewMockFilesystem(test.File{Path: "/SharedConfig.xml", Contents: ""}),
		},
		{
			root:  "/var/lib/waagent",
			files: test.NewMockFilesystem(test.File{Path: "/var/lib/waagent/SharedConfig.xml", Contents: ""}),
		},
		{
			root: "/var/lib/waagent",
			files: test.NewMockFilesystem(test.File{Path: "/var/lib/waagent/SharedConfig.xml", Contents: `<?xml version="1.0" encoding="utf-8"?>
<SharedConfig version="1.0.0.0" goalStateIncarnation="1">
  <Deployment name="c8f9e4c9c18948e1bebf57c5685da756" guid="{1d10394f-c741-4a1a-a6bb-278f213c5a5e}" incarnation="0" isNonCancellableTopologyChangeEnabled="false">
    <Service name="core-test-1" guid="{00000000-0000-0000-0000-000000000000}" />
    <ServiceInstance name="c8f9e4c9c18948e1bebf57c5685da756.0" guid="{1e202e9a-8ffe-4915-b6ef-4118c9628fda}" />
  </Deployment>
  <Incarnation number="1" instance="core-test-1" guid="{8767eb4b-b445-4783-b1f5-6c0beaf41ea0}" />
  <Role guid="{53ecc81e-257f-fbc9-a53a-8cf1a0a122b4}" name="core-test-1" settleTimeSeconds="0" />
  <LoadBalancerSettings timeoutSeconds="0" waitLoadBalancerProbeCount="8">
    <Probes>
      <Probe name="D41D8CD98F00B204E9800998ECF8427E" />
      <Probe name="C9DEC1518E1158748FA4B6081A8266DD" />
    </Probes>
  </LoadBalancerSettings>
  <OutputEndpoints>
    <Endpoint name="core-test-1:openInternalEndpoint" type="SFS">
      <Target instance="core-test-1" endpoint="openInternalEndpoint" />
    </Endpoint>
  </OutputEndpoints>
  <Instances>
    <Instance id="core-test-1" address="100.73.202.64">
      <FaultDomains randomId="0" updateId="0" updateCount="0" />
      <InputEndpoints>
        <Endpoint name="openInternalEndpoint" address="100.73.202.64" protocol="any" isPublic="false" enableDirectServerReturn="false" isDirectAddress="false" disableStealthMode="false">
          <LocalPorts>
            <LocalPortSelfManaged />
          </LocalPorts>
        </Endpoint>
        <Endpoint name="ssh" address="100.73.202.64:22" protocol="tcp" hostName="core-test-1ContractContract" isPublic="true" loadBalancedPublicAddress="191.239.39.77:22" enableDirectServerReturn="false" isDirectAddress="false" disableStealthMode="false">
          <LocalPorts>
            <LocalPortRange from="22" to="22" />
          </LocalPorts>
        </Endpoint>
      </InputEndpoints>
    </Instance>
  </Instances>
</SharedConfig>`}),
			metadata: datasource.Metadata{
				PrivateIPv4: net.ParseIP("100.73.202.64"),
				PublicIPv4:  net.ParseIP("191.239.39.77"),
			},
		},
	} {
		a := waagent{tt.root, tt.files.ReadFile}
		metadata, err := a.FetchMetadata()
		if err != nil {
			t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
		}
		if !reflect.DeepEqual(tt.metadata, metadata) {
			t.Fatalf("bad metadata for %+v: want %#v, got %#v", tt, tt.metadata, metadata)
		}
	}
}

func TestFetchUserdata(t *testing.T) {
	for _, tt := range []struct {
		root  string
		files test.MockFilesystem
	}{
		{
			"/",
			test.NewMockFilesystem(),
		},
		{
			"/",
			test.NewMockFilesystem(test.File{Path: "/CustomData", Contents: ""}),
		},
		{
			"/var/lib/waagent/",
			test.NewMockFilesystem(test.File{Path: "/var/lib/waagent/CustomData", Contents: ""}),
		},
	} {
		a := waagent{tt.root, tt.files.ReadFile}
		_, err := a.FetchUserdata()
		if err != nil {
			t.Fatalf("bad error for %+v: want %v, got %q", tt, nil, err)
		}
	}
}

func TestConfigRoot(t *testing.T) {
	for _, tt := range []struct {
		root       string
		configRoot string
	}{
		{
			"/",
			"/",
		},
		{
			"/var/lib/waagent",
			"/var/lib/waagent",
		},
	} {
		a := waagent{tt.root, nil}
		if configRoot := a.ConfigRoot(); configRoot != tt.configRoot {
			t.Fatalf("bad config root for %q: want %q, got %q", tt, tt.configRoot, configRoot)
		}
	}
}

func TestNewDatasource(t *testing.T) {
	for _, tt := range []struct {
		root       string
		expectRoot string
	}{
		{
			root:       "",
			expectRoot: "",
		},
		{
			root:       "/var/lib/waagent",
			expectRoot: "/var/lib/waagent",
		},
	} {
		service := NewDatasource(tt.root)
		if service.root != tt.expectRoot {
			t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.root)
		}
	}
}
@@ -1,293 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package initialize

import (
	"errors"
	"fmt"
	"log"
	"path"

	"github.com/coreos/coreos-cloudinit/config"
	"github.com/coreos/coreos-cloudinit/network"
	"github.com/coreos/coreos-cloudinit/system"
)

// CloudConfigFile represents a CoreOS specific configuration option that can generate
// an associated system.File to be written to disk
type CloudConfigFile interface {
	// File should either return (*system.File, error), or (nil, nil) if nothing
	// needs to be done for this configuration option.
	File() (*system.File, error)
}

// CloudConfigUnit represents a CoreOS specific configuration option that can generate
// associated system.Units to be created/enabled appropriately
type CloudConfigUnit interface {
	Units() []system.Unit
}

// Apply renders a CloudConfig to an Environment. This can involve things like
// configuring the hostname, adding new users, writing various configuration
// files to disk, and manipulating systemd services.
func Apply(cfg config.CloudConfig, ifaces []network.InterfaceGenerator, env *Environment) error {
	if cfg.Hostname != "" {
		if err := system.SetHostname(cfg.Hostname); err != nil {
			return err
		}
		log.Printf("Set hostname to %s", cfg.Hostname)
	}

	for _, user := range cfg.Users {
		if user.Name == "" {
			log.Printf("User object has no 'name' field, skipping")
			continue
		}

		if system.UserExists(&user) {
			log.Printf("User '%s' exists, ignoring creation-time fields", user.Name)
			if user.PasswordHash != "" {
				log.Printf("Setting '%s' user's password", user.Name)
				if err := system.SetUserPassword(user.Name, user.PasswordHash); err != nil {
					log.Printf("Failed setting '%s' user's password: %v", user.Name, err)
					return err
				}
			}
		} else {
			log.Printf("Creating user '%s'", user.Name)
			if err := system.CreateUser(&user); err != nil {
				log.Printf("Failed creating user '%s': %v", user.Name, err)
				return err
			}
		}

		if len(user.SSHAuthorizedKeys) > 0 {
			log.Printf("Authorizing %d SSH keys for user '%s'", len(user.SSHAuthorizedKeys), user.Name)
			if err := system.AuthorizeSSHKeys(user.Name, env.SSHKeyName(), user.SSHAuthorizedKeys); err != nil {
				return err
			}
		}
		if user.SSHImportGithubUser != "" {
			log.Printf("Authorizing github user %s SSH keys for CoreOS user '%s'", user.SSHImportGithubUser, user.Name)
			if err := SSHImportGithubUser(user.Name, user.SSHImportGithubUser); err != nil {
				return err
			}
		}
		for _, u := range user.SSHImportGithubUsers {
			log.Printf("Authorizing github user %s SSH keys for CoreOS user '%s'", u, user.Name)
			if err := SSHImportGithubUser(user.Name, u); err != nil {
				return err
			}
		}
		if user.SSHImportURL != "" {
			log.Printf("Authorizing SSH keys for CoreOS user '%s' from '%s'", user.Name, user.SSHImportURL)
			if err := SSHImportKeysFromURL(user.Name, user.SSHImportURL); err != nil {
				return err
			}
		}
	}

	if len(cfg.SSHAuthorizedKeys) > 0 {
		err := system.AuthorizeSSHKeys("core", env.SSHKeyName(), cfg.SSHAuthorizedKeys)
		if err == nil {
			log.Printf("Authorized SSH keys for core user")
		} else {
			return err
		}
	}

	var writeFiles []system.File
	for _, file := range cfg.WriteFiles {
		writeFiles = append(writeFiles, system.File{File: file})
	}

	for _, ccf := range []CloudConfigFile{
		system.OEM{OEM: cfg.CoreOS.OEM},
		system.Update{Update: cfg.CoreOS.Update, ReadConfig: system.DefaultReadConfig},
		system.EtcHosts{EtcHosts: cfg.ManageEtcHosts},
		system.Flannel{Flannel: cfg.CoreOS.Flannel},
	} {
		f, err := ccf.File()
		if err != nil {
			return err
		}
		if f != nil {
			writeFiles = append(writeFiles, *f)
		}
	}

	var units []system.Unit
	for _, u := range cfg.CoreOS.Units {
		units = append(units, system.Unit{Unit: u})
	}

	for _, ccu := range []CloudConfigUnit{
		system.Etcd{Etcd: cfg.CoreOS.Etcd},
		system.Fleet{Fleet: cfg.CoreOS.Fleet},
		system.Locksmith{Locksmith: cfg.CoreOS.Locksmith},
		system.Update{Update: cfg.CoreOS.Update, ReadConfig: system.DefaultReadConfig},
	} {
		units = append(units, ccu.Units()...)
	}

	wroteEnvironment := false
	for _, file := range writeFiles {
		fullPath, err := system.WriteFile(&file, env.Root())
		if err != nil {
			return err
		}
		if path.Clean(file.Path) == "/etc/environment" {
			wroteEnvironment = true
		}
		log.Printf("Wrote file %s to filesystem", fullPath)
	}

	if !wroteEnvironment {
		ef := env.DefaultEnvironmentFile()
		if ef != nil {
			err := system.WriteEnvFile(ef, env.Root())
			if err != nil {
				return err
			}
			log.Printf("Updated /etc/environment")
		}
	}

	if len(ifaces) > 0 {
		units = append(units, createNetworkingUnits(ifaces)...)
		if err := system.RestartNetwork(ifaces); err != nil {
			return err
		}
	}

	um := system.NewUnitManager(env.Root())
	return processUnits(units, env.Root(), um)
}

func createNetworkingUnits(interfaces []network.InterfaceGenerator) (units []system.Unit) {
	appendNewUnit := func(units []system.Unit, name, content string) []system.Unit {
		if content == "" {
			return units
		}
		return append(units, system.Unit{Unit: config.Unit{
			Name:    name,
			Runtime: true,
			Content: content,
		}})
	}
	for _, i := range interfaces {
		units = appendNewUnit(units, fmt.Sprintf("%s.netdev", i.Filename()), i.Netdev())
		units = appendNewUnit(units, fmt.Sprintf("%s.link", i.Filename()), i.Link())
		units = appendNewUnit(units, fmt.Sprintf("%s.network", i.Filename()), i.Network())
	}
	return units
}

// processUnits takes a set of Units and applies them to the given root using
// the given UnitManager. This can involve things like writing unit files to
// disk, masking/unmasking units, or invoking systemd
// commands against units. It returns any error encountered.
func processUnits(units []system.Unit, root string, um system.UnitManager) error {
	type action struct {
		unit    system.Unit
		command string
	}
	actions := make([]action, 0, len(units))
	reload := false
	restartNetworkd := false
	for _, unit := range units {
		if unit.Name == "" {
			log.Printf("Skipping unit without name")
			continue
		}

		if unit.Content != "" {
			log.Printf("Writing unit %q to filesystem", unit.Name)
			if err := um.PlaceUnit(unit); err != nil {
				return err
			}
			log.Printf("Wrote unit %q", unit.Name)
			reload = true
		}

		for _, dropin := range unit.DropIns {
			if dropin.Name != "" && dropin.Content != "" {
				log.Printf("Writing drop-in unit %q to filesystem", dropin.Name)
				if err := um.PlaceUnitDropIn(unit, dropin); err != nil {
					return err
				}
				log.Printf("Wrote drop-in unit %q", dropin.Name)
				reload = true
			}
		}

		if unit.Mask {
			log.Printf("Masking unit file %q", unit.Name)
			if err := um.MaskUnit(unit); err != nil {
				return err
			}
		} else if unit.Runtime {
			log.Printf("Ensuring runtime unit file %q is unmasked", unit.Name)
			if err := um.UnmaskUnit(unit); err != nil {
				return err
			}
		}

		if unit.Enable {
			if unit.Group() != "network" {
				log.Printf("Enabling unit file %q", unit.Name)
				if err := um.EnableUnitFile(unit); err != nil {
					return err
				}
				log.Printf("Enabled unit %q", unit.Name)
			} else {
				log.Printf("Skipping enable for network-like unit %q", unit.Name)
			}
		}

		if unit.Group() == "network" {
			restartNetworkd = true
		} else if unit.Command != "" {
			actions = append(actions, action{unit, unit.Command})
		}
	}

	if reload {
		if err := um.DaemonReload(); err != nil {
			return errors.New(fmt.Sprintf("failed systemd daemon-reload: %s", err))
		}
	}

	if restartNetworkd {
		log.Printf("Restarting systemd-networkd")
		networkd := system.Unit{Unit: config.Unit{Name: "systemd-networkd.service"}}
		res, err := um.RunUnitCommand(networkd, "restart")
		if err != nil {
			return err
		}
		log.Printf("Restarted systemd-networkd (%s)", res)
	}

	for _, action := range actions {
		log.Printf("Calling unit command %q on %q", action.command, action.unit.Name)
		res, err := um.RunUnitCommand(action.unit, action.command)
		if err != nil {
			return err
		}
		log.Printf("Result of %q on %q: %s", action.command, action.unit.Name, res)
	}

	return nil
}
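
Not part of the diff: a minimal sketch of how Apply might be driven, assuming the package layout shown in this change (config, datasource, initialize). The hostname, workspace path, and key name are illustrative values, and running this against root "/" would modify the real system; the project's own tests pass temporary directories instead.

package main

import (
	"log"

	"github.com/coreos/coreos-cloudinit/config"
	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/initialize"
)

func main() {
	// Hypothetical input: a trivial cloud-config that only sets the hostname.
	cfg := config.CloudConfig{Hostname: "node-1"}

	// Empty metadata: the $..._ipv4/ipv6 substitutions fall back to the
	// COREOS_* environment variables, as NewEnvironment shows below.
	env := initialize.NewEnvironment("/", "/", "/var/lib/coreos-cloudinit", "coreos-cloudinit", datasource.Metadata{})

	// No interface generators are passed, so no networkd units are created.
	if err := initialize.Apply(cfg, nil, env); err != nil {
		log.Fatalf("apply failed: %v", err)
	}
}
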
@@ -1,299 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package initialize

import (
	"reflect"
	"testing"

	"github.com/coreos/coreos-cloudinit/config"
	"github.com/coreos/coreos-cloudinit/network"
	"github.com/coreos/coreos-cloudinit/system"
)

type TestUnitManager struct {
	placed   []string
	enabled  []string
	masked   []string
	unmasked []string
	commands []UnitAction
	reload   bool
}

type UnitAction struct {
	unit    string
	command string
}

func (tum *TestUnitManager) PlaceUnit(u system.Unit) error {
	tum.placed = append(tum.placed, u.Name)
	return nil
}
func (tum *TestUnitManager) PlaceUnitDropIn(u system.Unit, d config.UnitDropIn) error {
	tum.placed = append(tum.placed, u.Name+".d/"+d.Name)
	return nil
}
func (tum *TestUnitManager) EnableUnitFile(u system.Unit) error {
	tum.enabled = append(tum.enabled, u.Name)
	return nil
}
func (tum *TestUnitManager) RunUnitCommand(u system.Unit, c string) (string, error) {
	tum.commands = append(tum.commands, UnitAction{u.Name, c})
	return "", nil
}
func (tum *TestUnitManager) DaemonReload() error {
	tum.reload = true
	return nil
}
func (tum *TestUnitManager) MaskUnit(u system.Unit) error {
	tum.masked = append(tum.masked, u.Name)
	return nil
}
func (tum *TestUnitManager) UnmaskUnit(u system.Unit) error {
	tum.unmasked = append(tum.unmasked, u.Name)
	return nil
}

type mockInterface struct {
	name           string
	filename       string
	netdev         string
	link           string
	network        string
	kind           string
	modprobeParams string
}

func (i mockInterface) Name() string {
	return i.name
}

func (i mockInterface) Filename() string {
	return i.filename
}

func (i mockInterface) Netdev() string {
	return i.netdev
}

func (i mockInterface) Link() string {
	return i.link
}

func (i mockInterface) Network() string {
	return i.network
}

func (i mockInterface) Type() string {
	return i.kind
}

func (i mockInterface) ModprobeParams() string {
	return i.modprobeParams
}

func TestCreateNetworkingUnits(t *testing.T) {
	for _, tt := range []struct {
		interfaces []network.InterfaceGenerator
		expect     []system.Unit
	}{
		{nil, nil},
		{
			[]network.InterfaceGenerator{
				network.InterfaceGenerator(mockInterface{filename: "test"}),
			},
			nil,
		},
		{
			[]network.InterfaceGenerator{
				network.InterfaceGenerator(mockInterface{filename: "test1", netdev: "test netdev"}),
				network.InterfaceGenerator(mockInterface{filename: "test2", link: "test link"}),
				network.InterfaceGenerator(mockInterface{filename: "test3", network: "test network"}),
			},
			[]system.Unit{
				system.Unit{Unit: config.Unit{Name: "test1.netdev", Runtime: true, Content: "test netdev"}},
				system.Unit{Unit: config.Unit{Name: "test2.link", Runtime: true, Content: "test link"}},
				system.Unit{Unit: config.Unit{Name: "test3.network", Runtime: true, Content: "test network"}},
			},
		},
		{
			[]network.InterfaceGenerator{
				network.InterfaceGenerator(mockInterface{filename: "test", netdev: "test netdev", link: "test link", network: "test network"}),
			},
			[]system.Unit{
				system.Unit{Unit: config.Unit{Name: "test.netdev", Runtime: true, Content: "test netdev"}},
				system.Unit{Unit: config.Unit{Name: "test.link", Runtime: true, Content: "test link"}},
				system.Unit{Unit: config.Unit{Name: "test.network", Runtime: true, Content: "test network"}},
			},
		},
	} {
		units := createNetworkingUnits(tt.interfaces)
		if !reflect.DeepEqual(tt.expect, units) {
			t.Errorf("bad units (%+v): want %#v, got %#v", tt.interfaces, tt.expect, units)
		}
	}
}

func TestProcessUnits(t *testing.T) {
	tests := []struct {
		units []system.Unit

		result TestUnitManager
	}{
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name: "foo",
					Mask: true,
				}},
			},
			result: TestUnitManager{
				masked: []string{"foo"},
			},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name:    "baz.service",
					Content: "[Service]\nExecStart=/bin/baz",
					Command: "start",
				}},
				system.Unit{Unit: config.Unit{
					Name:    "foo.network",
					Content: "[Network]\nFoo=true",
				}},
				system.Unit{Unit: config.Unit{
					Name:    "bar.network",
					Content: "[Network]\nBar=true",
				}},
			},
			result: TestUnitManager{
				placed: []string{"baz.service", "foo.network", "bar.network"},
				commands: []UnitAction{
					UnitAction{"systemd-networkd.service", "restart"},
					UnitAction{"baz.service", "start"},
				},
				reload: true,
			},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name:    "baz.service",
					Content: "[Service]\nExecStart=/bin/true",
				}},
			},
			result: TestUnitManager{
				placed: []string{"baz.service"},
				reload: true,
			},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name:    "locksmithd.service",
					Runtime: true,
				}},
			},
			result: TestUnitManager{
				unmasked: []string{"locksmithd.service"},
			},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name:   "woof",
					Enable: true,
				}},
			},
			result: TestUnitManager{
				enabled: []string{"woof"},
			},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name:    "hi.service",
					Runtime: true,
					Content: "[Service]\nExecStart=/bin/echo hi",
					DropIns: []config.UnitDropIn{
						{
							Name:    "lo.conf",
							Content: "[Service]\nExecStart=/bin/echo lo",
						},
						{
							Name:    "bye.conf",
							Content: "[Service]\nExecStart=/bin/echo bye",
						},
					},
				}},
			},
			result: TestUnitManager{
				placed:   []string{"hi.service", "hi.service.d/lo.conf", "hi.service.d/bye.conf"},
				unmasked: []string{"hi.service"},
				reload:   true,
			},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					DropIns: []config.UnitDropIn{
						{
							Name:    "lo.conf",
							Content: "[Service]\nExecStart=/bin/echo lo",
						},
					},
				}},
			},
			result: TestUnitManager{},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name: "hi.service",
					DropIns: []config.UnitDropIn{
						{
							Content: "[Service]\nExecStart=/bin/echo lo",
						},
					},
				}},
			},
			result: TestUnitManager{},
		},
		{
			units: []system.Unit{
				system.Unit{Unit: config.Unit{
					Name: "hi.service",
					DropIns: []config.UnitDropIn{
						{
							Name: "lo.conf",
						},
					},
				}},
			},
			result: TestUnitManager{},
		},
	}

	for _, tt := range tests {
		tum := &TestUnitManager{}
		if err := processUnits(tt.units, "", tum); err != nil {
			t.Errorf("bad error (%+v): want nil, got %s", tt.units, err)
		}
		if !reflect.DeepEqual(tt.result, *tum) {
			t.Errorf("bad result (%+v): want %+v, got %+v", tt.units, tt.result, tum)
		}
	}
}
@@ -1,116 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package initialize

import (
	"net"
	"os"
	"path"
	"regexp"
	"strings"

	"github.com/coreos/coreos-cloudinit/config"
	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/system"
)

const DefaultSSHKeyName = "coreos-cloudinit"

type Environment struct {
	root          string
	configRoot    string
	workspace     string
	sshKeyName    string
	substitutions map[string]string
}

// TODO(jonboulle): this is getting unwieldy, should be able to simplify the interface somehow
func NewEnvironment(root, configRoot, workspace, sshKeyName string, metadata datasource.Metadata) *Environment {
	firstNonNull := func(ip net.IP, env string) string {
		if ip == nil {
			return env
		}
		return ip.String()
	}
	substitutions := map[string]string{
		"$public_ipv4":  firstNonNull(metadata.PublicIPv4, os.Getenv("COREOS_PUBLIC_IPV4")),
		"$private_ipv4": firstNonNull(metadata.PrivateIPv4, os.Getenv("COREOS_PRIVATE_IPV4")),
		"$public_ipv6":  firstNonNull(metadata.PublicIPv6, os.Getenv("COREOS_PUBLIC_IPV6")),
		"$private_ipv6": firstNonNull(metadata.PrivateIPv6, os.Getenv("COREOS_PRIVATE_IPV6")),
	}
	return &Environment{root, configRoot, workspace, sshKeyName, substitutions}
}

func (e *Environment) Workspace() string {
	return path.Join(e.root, e.workspace)
}

func (e *Environment) Root() string {
	return e.root
}

func (e *Environment) ConfigRoot() string {
	return e.configRoot
}

func (e *Environment) SSHKeyName() string {
	return e.sshKeyName
}

func (e *Environment) SetSSHKeyName(name string) {
	e.sshKeyName = name
}

// Apply goes through the map of substitutions and replaces all instances of
// the keys with their respective values. It supports escaping substitutions
// with a leading '\'.
func (e *Environment) Apply(data string) string {
	for key, val := range e.substitutions {
		matchKey := strings.Replace(key, `$`, `\$`, -1)
		replKey := strings.Replace(key, `$`, `$$`, -1)

		// "key" -> "val"
		data = regexp.MustCompile(`([^\\]|^)`+matchKey).ReplaceAllString(data, `${1}`+val)
		// "\key" -> "key"
		data = regexp.MustCompile(`\\`+matchKey).ReplaceAllString(data, replKey)
	}
	return data
}

func (e *Environment) DefaultEnvironmentFile() *system.EnvFile {
	ef := system.EnvFile{
		File: &system.File{File: config.File{
			Path: "/etc/environment",
		}},
		Vars: map[string]string{},
	}
	if ip, ok := e.substitutions["$public_ipv4"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PUBLIC_IPV4"] = ip
	}
	if ip, ok := e.substitutions["$private_ipv4"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PRIVATE_IPV4"] = ip
	}
	if ip, ok := e.substitutions["$public_ipv6"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PUBLIC_IPV6"] = ip
	}
	if ip, ok := e.substitutions["$private_ipv6"]; ok && len(ip) > 0 {
		ef.Vars["COREOS_PRIVATE_IPV6"] = ip
	}
	if len(ef.Vars) == 0 {
		return nil
	} else {
		return &ef
	}
}
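
Not part of the diff: a worked example of the substitution and escaping behaviour that Apply implements above (and that the test file below exercises). The address and paths are assumed values for illustration.

package main

import (
	"fmt"
	"net"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/initialize"
)

func main() {
	// Metadata takes precedence over the COREOS_PRIVATE_IPV4 environment variable.
	md := datasource.Metadata{PrivateIPv4: net.ParseIP("10.0.0.2")}
	env := initialize.NewEnvironment("/", "/", "/var/lib/coreos-cloudinit", "coreos-cloudinit", md)

	// An unescaped key is substituted; a key escaped with a leading '\' is
	// emitted literally with the backslash stripped.
	out := env.Apply(`addr=$private_ipv4 literal=\$private_ipv4`)
	fmt.Println(out)
	// With the metadata above this prints:
	//   addr=10.0.0.2 literal=$private_ipv4
}
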
@@ -1,148 +0,0 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package initialize

import (
	"io/ioutil"
	"net"
	"os"
	"path"
	"testing"

	"github.com/coreos/coreos-cloudinit/datasource"
	"github.com/coreos/coreos-cloudinit/system"
)

func TestEnvironmentApply(t *testing.T) {
	os.Setenv("COREOS_PUBLIC_IPV4", "1.2.3.4")
	os.Setenv("COREOS_PRIVATE_IPV4", "5.6.7.8")
	os.Setenv("COREOS_PUBLIC_IPV6", "1234::")
	os.Setenv("COREOS_PRIVATE_IPV6", "5678::")
	for _, tt := range []struct {
		metadata datasource.Metadata
		input    string
		out      string
	}{
		{
			// Substituting both values directly should always take precedence
			// over environment variables
			datasource.Metadata{
				PublicIPv4:  net.ParseIP("192.0.2.3"),
				PrivateIPv4: net.ParseIP("192.0.2.203"),
				PublicIPv6:  net.ParseIP("fe00:1234::"),
				PrivateIPv6: net.ParseIP("fe00:5678::"),
			},
			`[Service]
ExecStart=/usr/bin/echo "$public_ipv4 $public_ipv6"
ExecStop=/usr/bin/echo $private_ipv4 $private_ipv6
ExecStop=/usr/bin/echo $unknown`,
			`[Service]
ExecStart=/usr/bin/echo "192.0.2.3 fe00:1234::"
ExecStop=/usr/bin/echo 192.0.2.203 fe00:5678::
ExecStop=/usr/bin/echo $unknown`,
		},
		{
			// Substituting one value directly while falling back with the other
			datasource.Metadata{
				PrivateIPv4: net.ParseIP("127.0.0.1"),
			},
			"$private_ipv4\n$public_ipv4",
			"127.0.0.1\n1.2.3.4",
		},
		{
			// Falling back to environment variables for both values
			datasource.Metadata{},
			"$private_ipv4\n$public_ipv4",
			"5.6.7.8\n1.2.3.4",
		},
		{
			// No substitutions
			datasource.Metadata{},
			"$private_ipv4\nfoobar",
			"5.6.7.8\nfoobar",
		},
		{
			// Escaping substitutions
			datasource.Metadata{
				PrivateIPv4: net.ParseIP("127.0.0.1"),
			},
			`\$private_ipv4
$private_ipv4
addr: \$private_ipv4
\\$private_ipv4`,
			`$private_ipv4
127.0.0.1
addr: $private_ipv4
\$private_ipv4`,
		},
		{
			// No substitutions with escaping
			datasource.Metadata{},
			"\\$test\n$test",
			"\\$test\n$test",
		},
	} {

		env := NewEnvironment("./", "./", "./", "", tt.metadata)
		got := env.Apply(tt.input)
		if got != tt.out {
			t.Fatalf("Environment incorrectly applied.\ngot:\n%s\nwant:\n%s", got, tt.out)
		}
	}
}

func TestEnvironmentFile(t *testing.T) {
	metadata := datasource.Metadata{
		PublicIPv4:  net.ParseIP("1.2.3.4"),
		PrivateIPv4: net.ParseIP("5.6.7.8"),
		PublicIPv6:  net.ParseIP("1234::"),
		PrivateIPv6: net.ParseIP("5678::"),
	}
	expect := "COREOS_PRIVATE_IPV4=5.6.7.8\nCOREOS_PRIVATE_IPV6=5678::\nCOREOS_PUBLIC_IPV4=1.2.3.4\nCOREOS_PUBLIC_IPV6=1234::\n"

	dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
	if err != nil {
		t.Fatalf("Unable to create tempdir: %v", err)
	}
	defer os.RemoveAll(dir)

	env := NewEnvironment("./", "./", "./", "", metadata)
	ef := env.DefaultEnvironmentFile()
	err = system.WriteEnvFile(ef, dir)
	if err != nil {
		t.Fatalf("WriteEnvFile failed: %v", err)
	}

	fullPath := path.Join(dir, "etc", "environment")
	contents, err := ioutil.ReadFile(fullPath)
	if err != nil {
		t.Fatalf("Unable to read expected file: %v", err)
	}

	if string(contents) != expect {
		t.Fatalf("File has incorrect contents: %q", contents)
	}
}

func TestEnvironmentFileNil(t *testing.T) {
	os.Clearenv()
	metadata := datasource.Metadata{}

	env := NewEnvironment("./", "./", "./", "", metadata)
	ef := env.DefaultEnvironmentFile()
	if ef != nil {
		t.Fatalf("Environment file not nil: %v", ef)
	}
}
Some files were not shown because too many files have changed in this diff.