Compare commits


110 Commits

Author SHA1 Message Date
Alex Crawford
7342d91a85 coreos-cloudinit: bump to 0.10.1 2014-09-12 16:47:58 -07:00
Alex Crawford
db1bc51c98 Merge pull request #231 from crawford/netconf
metadata: don't fail if no network config was provided
2014-09-12 16:35:24 -07:00
Alex Crawford
c1f373e648 metadata: don't fail if no network config was provided 2014-09-12 16:29:27 -07:00
Alex Crawford
db49a16002 coreos-cloudinit: bump to 0.10.0+git 2014-09-11 17:37:05 -07:00
Alex Crawford
a4a6c281d9 coreos-cloudinit: bump to 0.10.0 2014-09-11 17:36:38 -07:00
Alex Crawford
17f8733121 Merge pull request #228 from crawford/sub
env: add support for escaping environment substitutions
2014-09-11 15:34:03 -07:00
Alex Crawford
7dec922618 env: add support for escaping environment substitutions 2014-09-11 15:30:33 -07:00
Alex Crawford
54d3ae27af Merge pull request #226 from crawford/oem
flags: add oem flag
2014-09-11 13:25:21 -07:00
Alex Crawford
ee2416af64 flags: move the flags into their own namespace 2014-09-11 12:00:17 -07:00
Alex Crawford
cda037f9a5 flags: add oem flag
The oem flag will allow each of the OEMs to specify one flag only, acting as a
shortcut to their specific configuration. This will allow us to update which
options each OEM uses when running cloudinit.
2014-09-11 12:00:17 -07:00
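To make the new shortcut concrete, here is a minimal, self-contained Go sketch of the pattern this commit introduces. The OEM-to-flag table mirrors the `oemConfigs` map and flag names that appear in the main-program diff further down; the surrounding `main()` and the printed lookup are illustrative only, not the project's actual code.

```go
package main

import (
	"flag"
	"fmt"
)

// oem is the shortcut flag; every flag it expands to must already be defined.
var oem = flag.String("oem", "", "Use the settings specific to the provided OEM")

// oemConfigs maps an OEM name to the flags it implies (values copied from the diff below).
var oemConfigs = map[string]map[string]string{
	"digitalocean": {
		"from-digitalocean-metadata": "http://169.254.169.254/",
		"convert-netconf":            "digitalocean",
	},
	"rackspace-onmetal": {
		"from-configdrive": "/media/configdrive",
		"convert-netconf":  "debian",
	},
}

func main() {
	// Stand-ins for the real source flags defined in the program's init().
	flag.String("from-digitalocean-metadata", "", "")
	flag.String("from-configdrive", "", "")
	flag.String("convert-netconf", "", "")
	flag.Parse()

	// Expanding --oem is just flag.Set on each implied flag, so the defaults an
	// OEM uses can be updated centrally without touching the OEM image itself.
	if c, ok := oemConfigs[*oem]; ok {
		for k, v := range c {
			flag.Set(k, v)
		}
	}
	fmt.Println("convert-netconf =", flag.Lookup("convert-netconf").Value.String())
}
```

Running this sketch with `--oem=digitalocean` prints `convert-netconf = digitalocean`: one flag selects a whole bundle of provider-specific defaults.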
Alex Crawford
549806cf64 Merge pull request #227 from crawford/ipv6
metadata: add support for IPv6 variable substitution
2014-09-11 10:45:33 -07:00
Alex Crawford
56815a6756 metadata: add support for IPv6 variable substitution 2014-09-11 10:43:02 -07:00
Alex Crawford
24a6f7c49c Merge pull request #225 from crawford/exit
userdata: change handling of bad userdata
2014-09-10 19:16:12 -07:00
Alex Crawford
98484be434 userdata: change handling of bad userdata
Don't fail after encountering bad userdata. Continue processing the metadata
and then exit. This will allow people with bad userdata to actually log in and
see the error.
2014-09-10 17:50:23 -07:00
Jonathan Boulle
9024659296 Merge pull request #217 from ecnahc515/patch-1
Fix broken link to fleet config
2014-09-09 15:19:18 -07:00
Chance Zibolski
fc6940f7ba Documentation: More specific link to fleet config.
Add an anchor tag to the url to take the person directly to the config section.
2014-09-09 15:15:55 -07:00
Brian Waldon
f2fd95699b Merge pull request #224 from bcwaldon/typo
docs: fix a typo
2014-09-09 12:36:42 -07:00
bdevloed
65db96cc7c docs: fix a typo 2014-09-09 12:31:54 -07:00
Alex Crawford
c17b93b5c0 Merge pull request #223 from crawford/yaml
third_party: sync third_party/gopkg.in/yaml.v1
2014-09-08 19:28:59 -07:00
Alex Crawford
d352f8ce6a Merge pull request #222 from crawford/contribute
docs: Update maintainers and contribution guide
2014-09-08 15:54:30 -07:00
Alex Crawford
78aa2c56ec yaml: replace goyaml with yaml 2014-09-08 13:25:27 -07:00
Alex Crawford
c5b3788282 third_party: sync third_party/gopkg.in/yaml.v1
Update launchpad.net/goyaml to gopkg.in/yaml.v1
2014-09-08 13:23:50 -07:00
Alex Crawford
5e98970bb5 docs: Update maintainers and contribution guide 2014-09-08 12:55:17 -07:00
Alex Crawford
cbdd446c55 Merge pull request #220 from crawford/docs
docs: Update list of platforms supporting variable substitutions
2014-09-05 11:51:45 -07:00
Alex Crawford
316cadcf44 docs: Update list of platforms supporting variable substitutions 2014-09-04 12:57:19 -07:00
Alex Crawford
5a939be21b coreos-cloudinit: bump to 0.9.6+git 2014-09-02 17:49:09 -07:00
Alex Crawford
8d76c64386 coreos-cloudinit: bump to 0.9.6 2014-09-02 17:48:45 -07:00
Alex Crawford
1b854eb51e Merge pull request #218 from crawford/units
units: Ensure that the units are executed in order
2014-09-02 17:40:37 -07:00
Alex Crawford
9fcf338bf3 units: Ensure that the units are executed in order 2014-09-02 17:15:32 -07:00
Alex Crawford
fda72bdb5c coreos-cloudinit: bump to 0.9.5+git 2014-09-02 10:10:59 -07:00
Alex Crawford
685a38c6c8 coreos-cloudinit: bump to 0.9.5 2014-09-02 10:10:41 -07:00
Alex Crawford
9d15f2cfaf Merge pull request #213 from crawford/digitalocean
digitalocean: Add support for DigitalOcean
2014-09-01 16:55:12 -07:00
Alex Crawford
2134fce791 digitalocean: Add tests for network unit generation 2014-09-01 16:53:15 -07:00
Alex Crawford
3abd6b2225 digitalocean: Add DigitalOcean metadata service
Move debian-related processing into its own file.
2014-09-01 16:53:15 -07:00
Alex Crawford
2a8e6c9566 network: Fall back to MAC address if there is no name 2014-09-01 09:29:45 -07:00
Alex Crawford
abe43537da metadata: Merge the network config 2014-09-01 09:29:45 -07:00
Jonathan Boulle
3a550af651 Merge pull request #216 from robszumski/patch-2
docs: fix broken link to fleet docs
2014-08-29 11:22:13 -07:00
Rob Szumski
61c3a0eb2d docs: fix broken link to fleet docs 2014-08-29 11:17:05 -07:00
Brian Waldon
480176bc11 Merge pull request #214 from bcwaldon/clarify-write-files
doc: clarify docs around write_files
2014-08-28 20:24:11 -07:00
Brian Waldon
01b18eb551 squash: fix spacing 2014-08-28 13:48:58 -07:00
Brian Waldon
970ef435b6 doc: clarify docs around write_files 2014-08-28 13:33:59 -07:00
Alex Crawford
e8d0021140 Merge pull request #212 from crawford/metadata
refactor: Refactor metadata and datasources to be more testable
2014-08-26 18:45:10 -07:00
Alex Crawford
e9ec78ac6f test: Refactor interface tests a little 2014-08-26 13:18:46 -07:00
Alex Crawford
4a2e417781 network: Add support for multiple addresses and HW addresses
Logical Interfaces can be assigned a hardware address allowing them
to match on MAC address. The static config method also needs to
support specifying multiple addresses.
2014-08-26 13:07:38 -07:00
Alex Crawford
604ef7ecb4 datasource: Add FetchNetworkConfig
FetchNetworkConfig is currently only used by ConfigDrive to read the
network config file from the disk.
2014-08-26 13:04:43 -07:00
Alex Crawford
c39dd5cc67 networkd: Fix bug causing bonding to always be loaded 2014-08-26 13:04:21 -07:00
Alex Crawford
a923161f4a metadata: Refactor common parts out of ec2 2014-08-26 12:02:56 -07:00
Alex Crawford
e59e2f6cd5 Merge pull request #210 from crawford/test
test: Add gofmt to test
2014-08-25 17:04:04 -07:00
Alex Crawford
e90fe3eba8 test: Add gofmt to test 2014-08-25 12:48:52 -07:00
Alex Crawford
fb0187b197 gofmt: sort 2014-08-25 12:35:40 -07:00
Michael Marineau
6babe74716 Merge pull request #209 from marineam/go13
travis: enable testing under go 1.3
2014-08-25 12:26:23 -07:00
Michael Marineau
b1e88284ca travis: enable testing under go 1.3 2014-08-25 12:21:07 -07:00
Alex Crawford
18a65f7dac Merge pull request #208 from crawford/go
test: Fix tests for Go 1.3
2014-08-25 12:19:52 -07:00
Alex Crawford
0c212c72c9 test: Fix tests for Go 1.3 2014-08-25 12:01:27 -07:00
Alex Crawford
6a800d8cc0 coreos-cloudinit: bump to 0.9.4+git 2014-08-24 18:41:20 -07:00
Alex Crawford
5e112147bb coreos-cloudinit: bump to 0.9.4 2014-08-24 18:40:53 -07:00
Alex Crawford
7e78b1563f Merge pull request #206 from crawford/tests
test: Enable tests for CloudSigma datasource
2014-08-24 18:36:38 -07:00
Alex Crawford
ecbe81f103 test: Enable tests for CloudSigma datasource 2014-08-24 17:08:49 -07:00
Alex Crawford
45c20c1dd3 Merge pull request #196 from Vladimiroff/cloudsigma
cloudsigma: Add support for CloudSigma datasource
2014-08-15 15:21:33 -07:00
Alex Crawford
8ce925a060 coreos-cloudinit: bump to 0.9.3+git 2014-08-15 10:47:28 -07:00
Alex Crawford
eadb6ef42c coreos-cloudinit: bump to 0.9.3 2014-08-15 10:46:46 -07:00
Alex Crawford
7518f0ec93 Merge pull request #204 from crawford/configdrive
configdrive: Remove broken support for ec2 metadata
2014-08-15 10:43:26 -07:00
Alex Crawford
f0b9eaf2fe configdrive: Remove broken support for ec2 metadata
As it turns out, certain metadata is only present in the ec2 flavor
of metadata (e.g. public_ipv4) and other data is only present in
the openstack flavor (e.g. network_config). For now, just read the
openstack metadata.
2014-08-15 10:35:21 -07:00
Kiril Vladimirov
7320a2cbf2 feat(datasource/metadata): Add datasource for CloudSigma 2014-08-15 12:08:55 +03:00
Kiril Vladimirov
57950b3ed9 add(goserial): import github.com/tarm/goserial 2014-08-15 12:08:34 +03:00
Kiril Vladimirov
85c6a2a16a add(cepgo): import github.com/cloudsigma/cepgo 2014-08-15 12:07:58 +03:00
Jonathan Boulle
24b44e86a6 coreos-cloudinit: bump to 0.9.2+git 2014-08-12 11:38:51 -07:00
Jonathan Boulle
2f52ad4ef8 coreos-cloudinit: bump to 0.9.2 2014-08-12 11:38:12 -07:00
Jonathan Boulle
735d6c6161 Merge pull request #202 from jonboulle/env
environment: write new keys in consistent order
2014-08-11 22:40:42 -07:00
Alex Crawford
1cf275bad6 Merge pull request #201 from crawford/configdrive
configdrive: fix root path
2014-08-11 20:11:17 -07:00
Jonathan Boulle
f1c97cb4d5 environment: write new keys in consistent order 2014-08-11 18:24:58 -07:00
Alex Crawford
d143904aa9 configdrive: fix root path 2014-08-11 17:57:10 -07:00
Jonathan Boulle
c428ce2cc5 Merge pull request #200 from jonboulle/fu
initialize: use correct heuristic to check if etcdenvironment is set
2014-08-11 17:44:44 -07:00
Jonathan Boulle
dfb5b4fc3a initialize: use correct heuristic to check if etcdenvironment is set
In some circumstances (e.g. nova-agent-watcher) cloudconfig files will
be created where the EtcdEnvironment is an empty map, and hence != nil.
If this is the case we should not do anything at all (because the user
hasn't explicitly asked us to configure etcd). This change standardises
behaviour with the check that we already do for FleetEnvironment.
2014-08-11 16:01:08 -07:00
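The heuristic above turns on a Go detail that is easy to miss: an empty map is not nil. A minimal, standalone illustration (not project code) of why `!= nil` is the wrong emptiness check:

```go
package main

import "fmt"

func main() {
	var unset map[string]string  // nil: the user provided no etcd section at all
	empty := map[string]string{} // non-nil but empty, e.g. emitted by nova-agent-watcher

	fmt.Println(unset == nil, len(unset)) // true 0
	fmt.Println(empty == nil, len(empty)) // false 0

	// Checking the length treats both cases as "nothing was configured",
	// which is the behaviour the commit message calls for.
	fmt.Println(len(unset) == 0, len(empty) == 0) // true true
}
```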
Alex Crawford
97d5538533 Merge pull request #197 from crawford/ec2
datasource: Fix ec2 URLs
2014-08-06 22:45:03 -07:00
Alex Crawford
6b8f82b5d3 datasource: Fix ec2 URLs
_ vs -
2014-08-06 21:31:43 -07:00
Alex Crawford
facde6609f Merge pull request #194 from crawford/metadata
datasource: Refactoring datasources
2014-08-06 15:55:13 -07:00
Alex Crawford
d68ae84b37 metadata: Refactor metadata service into ec2 metadata
Added more testing.
2014-08-05 17:19:43 -07:00
Alex Crawford
54aa39543b timeouts: Use After() instead of Tick() 2014-08-04 15:10:14 -07:00
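For context on this one-liner: `time.Tick` allocates a `time.Ticker` that is never stopped, so calling it inside a retry loop leaks one ticker per iteration, while `time.After` is a one-shot timer. A minimal sketch of the resulting pattern, loosely mirroring the select loops changed in the main-program diff below; the loop shape and durations here are illustrative, not the project's actual values.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	stop := make(chan struct{})
	duration := 100 * time.Millisecond

	go func() {
		time.Sleep(350 * time.Millisecond)
		close(stop) // simulate the datasource becoming ready elsewhere
	}()

	for i := 0; ; i++ {
		select {
		case <-stop:
			fmt.Println("stopped after", i, "timeouts")
			return
		case <-time.After(duration): // one-shot timer; no ticker is left running per iteration
			fmt.Println("waiting...")
		}
	}
}
```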
Alex Crawford
8566a2c118 datasource: Move datasources into their own packages. 2014-08-04 15:10:07 -07:00
Alex Crawford
49ac083af5 coreos-cloudinit: bump to 0.9.1+git 2014-08-04 14:14:24 -07:00
Alex Crawford
5d65ca230a coreos-cloudinit: bump to 0.9.1 2014-08-04 14:13:51 -07:00
Alex Crawford
38b3e1213a Merge pull request #188 from crawford/configdrive
configdrive: Use the EC2 metadata over OpenStack
2014-08-04 11:12:06 -07:00
Alex Crawford
4eedca26e9 configdrive: Use the EC2 metadata over OpenStack
Standardize on specific EC2 and OpenStack versions and add tests.
2014-08-04 10:18:29 -07:00
Brian Waldon
f2b342c8be doc: escape user.home example 2014-08-01 13:20:44 -07:00
Michael Marineau
c19d8f6b61 Merge pull request #193 from benjic/cloudconfig_variables
docs(quick-start): Clarified use of fields in cloud config
2014-07-24 11:02:03 -07:00
Benjamin Campbell
7913f74351 docs(quick-start) Enumerated supported platforms
Following a suggestion, added a list of platforms that *do* support cloud config variables. In addition, minor markup formatting is added.
2014-07-24 11:54:31 -06:00
Benjamin Campbell
5593408be8 docs(quick-start): Clarified use of fields in cloud config
Updated the language to illustrate that fields in a cloud config are not
supported in all environments. This is expressed explicitly in the PXE and
install-to-disk pages. The quick start lacked this information and was
inconsistent.
2014-07-24 11:27:35 -06:00
Alex Crawford
7fc67c2acf Merge pull request #191 from crawford/panic
config: Verify that type assertions are valid
2014-07-22 11:51:39 -07:00
Alex Crawford
b093094292 config: Verify that type assertions are valid 2014-07-22 11:39:20 -07:00
Michael Marineau
9a80fd714a Merge pull request #181 from robszumski/docs-startup
fix(docs): clarity around boot behavior and unit usage
2014-07-21 22:12:19 -07:00
Rob Szumski
fef5473881 fix(docs): clarity around boot behavior and unit usage 2014-07-21 21:41:00 -07:00
Alex Crawford
bf5a2b208f coreos-cloudinit: bump to 0.9.0+git 2014-07-21 19:17:14 -07:00
Alex Crawford
364507fb75 coreos-cloudinit: bump to 0.9.0 2014-07-21 19:16:11 -07:00
Alex Crawford
08d4842502 Merge pull request #190 from crawford/logs
Logs
2014-07-21 12:22:41 -07:00
Alex Crawford
21e32e44f8 system: Add more logging for networkd 2014-07-21 11:25:22 -07:00
Alex Crawford
7a06dee16f system: Cleanup redundant code 2014-07-21 11:24:42 -07:00
Alex Crawford
ff9cf5743d Merge pull request #187 from crawford/order
networkd: Reverse lexicographic order of generated unit files
2014-07-18 13:23:58 -07:00
Alex Crawford
1b10a3a187 networkd: Reverse lexicographic order of generated unit files 2014-07-17 20:47:37 -07:00
Michael Marineau
10838e001d Merge pull request #186 from robszumski/add-highlighting
feat(docs): add syntax highlighting
2014-07-15 15:26:33 -07:00
Rob Szumski
96370ac5b9 feat(docs): add syntax highlighting 2014-07-14 16:16:14 -07:00
Michael Marineau
0b82cd074d Merge pull request #180 from marineam/systemd_testing
chore(*): split out unit processing from config.Apply
2014-07-11 20:09:08 -07:00
Alex Crawford
a974e85103 Merge pull request #174 from crawford/teeth
networkd: Fix issues with bonding and VLANs
2014-07-11 15:48:02 -07:00
Michael Marineau
f0450662b0 Merge pull request #183 from marineam/fix
tests: fix error messages, use Fatalf
2014-07-11 15:40:54 -07:00
Michael Marineau
03e29d1291 tests: fix error messages, use Fatalf 2014-07-11 15:38:04 -07:00
Michael Marineau
98ae5d88aa coreos-cloudinit: bump to 0.8.9+git 2014-07-11 14:40:57 -07:00
Jonathan Boulle
be51f4eba0 chore(*): split out unit processing from config.Apply 2014-07-11 10:44:19 -07:00
Alex Crawford
e3037f18a6 networkd: Restart networkd twice to work around race
https://bugs.freedesktop.org/show_bug.cgi?id=76077
2014-07-10 23:40:42 -07:00
Alex Crawford
fe388a3ab6 networkd: Create config directory before writing config 2014-07-10 23:40:42 -07:00
Alex Crawford
c820f2b1cf bonding: Add support for probing the bonding module with parameters
Until support for bonding params is added to networkd, this will be
necessary in order to use bonding parameters (i.e. miimon, mode).
This also makes it such that the 8021q module will only be loaded if
the network config makes use of VLANs.
2014-07-10 23:40:42 -07:00
86 changed files with 4486 additions and 1149 deletions

View File

@@ -1,5 +1,7 @@
 language: go
-go: 1.2
+go:
+  - 1.3
+  - 1.2
 install:
   - go get code.google.com/p/go.tools/cmd/cover

View File

@@ -39,22 +39,25 @@ Thanks for your contributions!
 ### Format of the Commit Message
-We follow a rough convention for commit messages borrowed from AngularJS. This
-is an example of a commit:
+We follow a rough convention for commit messages that is designed to answer two
+questions: what changed and why. The subject line should feature the what and
+the body of the commit should describe the why.
 ```
-feat(scripts/test-cluster): add a cluster test command
-this uses tmux to setup a test cluster that you can easily kill and
-start for debugging.
+environment: write new keys in consistent order
+Go 1.3 randomizes the ordering of keys when iterating over a map.
+Sort the keys to make this ordering consistent.
+Fixes #38
 ```
 The format can be described more formally as follows:
 ```
-<type>(<scope>): <subject>
+<subsystem>: <what changed>
 <BLANK LINE>
-<body>
+<why this change was made>
 <BLANK LINE>
 <footer>
 ```
@@ -63,25 +66,3 @@ The first line is the subject and should be no longer than 70 characters, the
 second line is always blank, and other lines should be wrapped at 80 characters.
 This allows the message to be easier to read on GitHub as well as in various
 git tools.
-#### Subject Line
-The subject line contains a succinct description of the change.
-#### Allowed `<type>`s
-- *feat* (feature)
-- *fix* (bug fix)
-- *docs* (documentation)
-- *style* (formatting, missing semi colons, …)
-- *refactor*
-- *test* (when adding missing tests)
-- *chore* (maintain)
-#### Allowed `<scope>`s
-Scopes can anything specifying the place of the commit change in the code base -
-for example, "api", "store", etc.
-For more details on the commit format, see the [AngularJS commit style
-guide](https://docs.google.com/a/coreos.com/document/d/1QrDFcIiPjSLDn3EL15IJygNPiHORgU1_OOAqWjiDU5Y/edit#).

View File

@@ -13,7 +13,7 @@ If no **id** field is provided, coreos-cloudinit will ignore this section.
 For example, the following cloud-config document...
-```
+```yaml
 #cloud-config
 coreos:
   oem:
@@ -26,7 +26,7 @@ coreos:
 ...would be rendered to the following `/etc/oem-release`:
-```
+```yaml
 ID=rackspace
 NAME="Rackspace Cloud Servers"
 VERSION_ID=168.0.0

View File

@@ -1,10 +1,10 @@
 # Using Cloud-Config
-CoreOS allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units. This document describes the full list of items we can configure. The `coreos-cloudinit` program uses these files as it configures the OS after startup or during runtime.
+CoreOS allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units. This document describes the full list of items we can configure. The `coreos-cloudinit` program uses these files as it configures the OS after startup or during runtime. Your cloud-config is processed during each boot.
 ## Configuration File
-The file used by this system initialization program is called a "cloud-config" file. It is inspired by the [cloud-init][cloud-init] project's [cloud-config][cloud-config] file. which is "the defacto multi-distribution package that handles early initialization of a cloud instance" ([cloud-init docs][cloud-init-docs]). Because the cloud-init project includes tools which aren't used by CoreOS, only the relevant subset of its configuration items will be implemented in our cloud-config file. In addition to those, we added a few CoreOS-specific items, such as etcd configuration, OEM definition, and systemd units.
+The file used by this system initialization program is called a "cloud-config" file. It is inspired by the [cloud-init][cloud-init] project's [cloud-config][cloud-config] file, which is "the defacto multi-distribution package that handles early initialization of a cloud instance" ([cloud-init docs][cloud-init-docs]). Because the cloud-init project includes tools which aren't used by CoreOS, only the relevant subset of its configuration items will be implemented in our cloud-config file. In addition to those, we added a few CoreOS-specific items, such as etcd configuration, OEM definition, and systemd units.
 We've designed our implementation to allow the same cloud-config file to work across all of our supported platforms.
@@ -40,9 +40,9 @@ CoreOS tries to conform to each platform's native method to provide user data. E
 #### etcd
 The `coreos.etcd.*` parameters will be translated to a partial systemd unit acting as an etcd configuration file.
-We can use the templating feature of coreos-cloudinit to automate etcd configuration with the `$private_ipv4` and `$public_ipv4` fields. For example, the following cloud-config document...
+If the platform environment supports the templating feature of coreos-cloudinit it is possible to automate etcd configuration with the `$private_ipv4` and `$public_ipv4` fields. For example, the following cloud-config document...
-```
+```yaml
 #cloud-config
 coreos:
@@ -57,7 +57,7 @@ coreos:
 ...will generate a systemd unit drop-in like this:
-```
+```yaml
 [Service]
 Environment="ETCD_NAME=node001"
 Environment="ETCD_DISCOVERY=https://discovery.etcd.io/<token>"
@@ -68,13 +68,15 @@ Environment="ETCD_PEER_ADDR=192.0.2.13:7001"
 For more information about the available configuration parameters, see the [etcd documentation][etcd-config].
 Note that hyphens in the coreos.etcd.* keys are mapped to underscores.
+_Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are only supported on Amazon EC2, Google Compute Engine, OpenStack, Rackspace, DigitalOcean, and Vagrant._
 [etcd-config]: https://github.com/coreos/etcd/blob/master/Documentation/configuration.md
 #### fleet
 The `coreos.fleet.*` parameters work very similarly to `coreos.etcd.*`, and allow for the configuration of fleet through environment variables. For example, the following cloud-config document...
-```
+```yaml
 #cloud-config
 coreos:
@@ -85,7 +87,7 @@ coreos:
 ...will generate a systemd unit drop-in like this:
-```
+```yaml
 [Service]
 Environment="FLEET_PUBLIC_IP=203.0.113.29"
 Environment="FLEET_METADATA=region=us-west"
@@ -93,7 +95,7 @@ Environment="FLEET_METADATA=region=us-west"
 For more information on fleet configuration, see the [fleet documentation][fleet-config].
-[fleet-config]: https://github.com/coreos/fleet/blob/master/Documentation/configuration.md
+[fleet-config]: https://github.com/coreos/fleet/blob/master/Documentation/deployment-and-configuration.md#configuration
 #### update
@@ -114,7 +116,7 @@ The `reboot-strategy` parameter also affects the behaviour of [locksmith](https:
 ##### Example
-```
+```yaml
 #cloud-config
 coreos:
   update:
@@ -123,7 +125,9 @@ coreos:
 #### units
-The `coreos.units.*` parameters define a list of arbitrary systemd units to start. Each item is an object with the following fields:
+The `coreos.units.*` parameters define a list of arbitrary systemd units to start after booting. This feature is intended to help you start essential services required to mount storage and configure networking in order to join the CoreOS cluster. It is not intended to be a Chef/Puppet replacement.
+Each item is an object with the following fields:
 - **name**: String representing unit's name. Required.
 - **runtime**: Boolean indicating whether or not to persist the unit across reboots. This is analogous to the `--runtime` argument to `systemctl enable`. Default value is false.
@@ -138,7 +142,7 @@ The `coreos.units.*` parameters define a list of arbitrary systemd units to star
 Write a unit to disk, automatically starting it.
-```
+```yaml
 #cloud-config
 coreos:
@@ -159,7 +163,7 @@ coreos:
 Start the built-in `etcd` and `fleet` services:
-```
+```yaml
 #cloud-config
 coreos:
@@ -177,7 +181,7 @@ The `ssh_authorized_keys` parameter adds public SSH keys which will be authorize
 The keys will be named "coreos-cloudinit" by default.
 Override this by using the `--ssh-key-name` flag when calling `coreos-cloudinit`.
-```
+```yaml
 #cloud-config
 ssh_authorized_keys:
@@ -189,7 +193,7 @@ ssh_authorized_keys:
 The `hostname` parameter defines the system's hostname.
 This is the local part of a fully-qualified domain name (i.e. `foo` in `foo.example.com`).
-```
+```yaml
 #cloud-config
 hostname: coreos1
@@ -203,7 +207,7 @@ All but the `passwd` and `ssh-authorized-keys` fields will be ignored if the use
 - **name**: Required. Login name of user
 - **gecos**: GECOS comment of user
 - **passwd**: Hash of the password to use for this user
-- **homedir**: User's home directory. Defaults to /home/<name>
+- **homedir**: User's home directory. Defaults to /home/\<name\>
 - **no-create-home**: Boolean. Skip home directory creation.
 - **primary-group**: Default group for the user. Defaults to a new group created named after the user.
 - **groups**: Add user to these additional groups
@@ -222,7 +226,7 @@ The following fields are not yet implemented:
 - **selinux-user**: Corresponding SELinux user
 - **ssh-import-id**: Import SSH keys by ID from Launchpad.
-```
+```yaml
 #cloud-config
 users:
@@ -261,7 +265,7 @@ Using a higher number of rounds will help create more secure passwords, but give
 Using the `coreos-ssh-import-github` field, we can import public SSH keys from a GitHub user to use as authorized keys to a server.
-```
+```yaml
 #cloud-config
 users:
@@ -274,7 +278,7 @@ users:
 We can also pull public SSH keys from any HTTP endpoint which matches [GitHub's API response format](https://developer.github.com/v3/users/keys/#list-public-keys-for-a-user).
 For example, if you have an installation of GitHub Enterprise, you can provide a complete URL with an authentication token:
-```
+```yaml
 #cloud-config
 users:
@@ -284,7 +288,7 @@ users:
 You can also specify any URL whose response matches the JSON format for public keys:
-```
+```yaml
 #cloud-config
 users:
@@ -294,7 +298,8 @@ users:
 ### write_files
-The `write-file` parameter defines a list of files to create on the local filesystem. Each file is represented as an associative array which has the following keys:
+The `write_files` directive defines a set of files to create on the local filesystem.
+Each item in the list may have the following keys:
 - **path**: Absolute location on disk where contents should be written
 - **content**: Data to write at the provided `path`
@@ -304,14 +309,19 @@ The `write-file` parameter defines a list of files to create on the local filesy
 Explicitly not implemented is the **encoding** attribute.
 The **content** field must represent exactly what should be written to disk.
-```
+```yaml
 #cloud-config
 write_files:
-  - path: /etc/fleet/fleet.conf
+  - path: /etc/resolv.conf
     permissions: 0644
+    owner: root
     content: |
-      verbosity=1
-      metadata="region=us-west,type=ssd"
+      nameserver 8.8.8.8
+  - path: /etc/motd
+    permissions: 0644
+    owner: root
+    content: |
+      Good news, everyone!
 ```
 ### manage_etc_hosts
@@ -321,7 +331,7 @@ Currently, the only supported value is "localhost" which will cause your system'
 to resolve to "127.0.0.1". This is helpful when the host does not have DNS
 infrastructure in place to resolve its own hostname, for example, when using Vagrant.
-```
+```yaml
 #cloud-config
 manage_etc_hosts: localhost

View File

@@ -14,17 +14,21 @@ The image should be a single FAT or ISO9660 file system with the label
 For example, to wrap up a config named `user_data` in a config drive image:
-    mkdir -p /tmp/new-drive/openstack/latest
-    cp user_data /tmp/new-drive/openstack/latest/user_data
-    mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive
-    rm -r /tmp/new-drive
+```sh
+mkdir -p /tmp/new-drive/openstack/latest
+cp user_data /tmp/new-drive/openstack/latest/user_data
+mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive
+rm -r /tmp/new-drive
+```
 ## QEMU virtfs
 One exception to the above, when using QEMU it is possible to skip creating an
 image and use a plain directory containing the same contents:
-    qemu-system-x86_64 \
+```sh
+qemu-system-x86_64 \
     -fsdev local,id=conf,security_model=none,readonly,path=/tmp/new-drive \
     -device virtio-9p-pci,fsdev=conf,mount_tag=config-2 \
     [usual qemu options here...]
+```

MAINTAINERS (new normal file, 3 additions)
View File

@@ -0,0 +1,3 @@
Alex Crawford <alex.crawford@coreos.com> (@crawford)
Jonathan Boulle <jonathan.boulle@coreos.com> (@jonboulle)
Brian Waldon <brian.waldon@coreos.com> (@bcwaldon)

View File

@@ -8,98 +8,138 @@ import (
 	"time"
 	"github.com/coreos/coreos-cloudinit/datasource"
+	"github.com/coreos/coreos-cloudinit/datasource/configdrive"
+	"github.com/coreos/coreos-cloudinit/datasource/file"
+	"github.com/coreos/coreos-cloudinit/datasource/metadata/cloudsigma"
+	"github.com/coreos/coreos-cloudinit/datasource/metadata/digitalocean"
+	"github.com/coreos/coreos-cloudinit/datasource/metadata/ec2"
+	"github.com/coreos/coreos-cloudinit/datasource/proc_cmdline"
+	"github.com/coreos/coreos-cloudinit/datasource/url"
 	"github.com/coreos/coreos-cloudinit/initialize"
 	"github.com/coreos/coreos-cloudinit/pkg"
 	"github.com/coreos/coreos-cloudinit/system"
 )
 const (
-	version = "0.8.9"
+	version = "0.10.1"
 	datasourceInterval = 100 * time.Millisecond
 	datasourceMaxInterval = 30 * time.Second
 	datasourceTimeout = 5 * time.Minute
 )
 var (
+	flags = struct {
 		printVersion bool
 		ignoreFailure bool
 		sources struct {
 			file string
 			configDrive string
 			metadataService bool
+			ec2MetadataService string
+			cloudSigmaMetadataService bool
+			digitalOceanMetadataService string
 			url string
 			procCmdLine bool
 		}
 		convertNetconf string
 		workspace string
 		sshKeyName string
+		oem string
+	}{}
 )
 func init() {
-	flag.BoolVar(&printVersion, "version", false, "Print the version and exit")
-	flag.BoolVar(&ignoreFailure, "ignore-failure", false, "Exits with 0 status in the event of malformed input from user-data")
-	flag.StringVar(&sources.file, "from-file", "", "Read user-data from provided file")
-	flag.StringVar(&sources.configDrive, "from-configdrive", "", "Read data from provided cloud-drive directory")
-	flag.BoolVar(&sources.metadataService, "from-metadata-service", false, "Download data from metadata service")
-	flag.StringVar(&sources.url, "from-url", "", "Download user-data from provided url")
-	flag.BoolVar(&sources.procCmdLine, "from-proc-cmdline", false, fmt.Sprintf("Parse %s for '%s=<url>', using the cloud-config served by an HTTP GET to <url>", datasource.ProcCmdlineLocation, datasource.ProcCmdlineCloudConfigFlag))
-	flag.StringVar(&convertNetconf, "convert-netconf", "", "Read the network config provided in cloud-drive and translate it from the specified format into networkd unit files (requires the -from-configdrive flag)")
-	flag.StringVar(&workspace, "workspace", "/var/lib/coreos-cloudinit", "Base directory coreos-cloudinit should use to store data")
-	flag.StringVar(&sshKeyName, "ssh-key-name", initialize.DefaultSSHKeyName, "Add SSH keys to the system with the given name")
+	flag.BoolVar(&flags.printVersion, "version", false, "Print the version and exit")
+	flag.BoolVar(&flags.ignoreFailure, "ignore-failure", false, "Exits with 0 status in the event of malformed input from user-data")
+	flag.StringVar(&flags.sources.file, "from-file", "", "Read user-data from provided file")
+	flag.StringVar(&flags.sources.configDrive, "from-configdrive", "", "Read data from provided cloud-drive directory")
+	flag.BoolVar(&flags.sources.metadataService, "from-metadata-service", false, "[DEPRECATED - Use -from-ec2-metadata] Download data from metadata service")
+	flag.StringVar(&flags.sources.ec2MetadataService, "from-ec2-metadata", "", "Download EC2 data from the provided url")
+	flag.BoolVar(&flags.sources.cloudSigmaMetadataService, "from-cloudsigma-metadata", false, "Download data from CloudSigma server context")
+	flag.StringVar(&flags.sources.digitalOceanMetadataService, "from-digitalocean-metadata", "", "Download DigitalOcean data from the provided url")
+	flag.StringVar(&flags.sources.url, "from-url", "", "Download user-data from provided url")
+	flag.BoolVar(&flags.sources.procCmdLine, "from-proc-cmdline", false, fmt.Sprintf("Parse %s for '%s=<url>', using the cloud-config served by an HTTP GET to <url>", proc_cmdline.ProcCmdlineLocation, proc_cmdline.ProcCmdlineCloudConfigFlag))
+	flag.StringVar(&flags.oem, "oem", "", "Use the settings specific to the provided OEM")
+	flag.StringVar(&flags.convertNetconf, "convert-netconf", "", "Read the network config provided in cloud-drive and translate it from the specified format into networkd unit files")
+	flag.StringVar(&flags.workspace, "workspace", "/var/lib/coreos-cloudinit", "Base directory coreos-cloudinit should use to store data")
+	flag.StringVar(&flags.sshKeyName, "ssh-key-name", initialize.DefaultSSHKeyName, "Add SSH keys to the system with the given name")
 }
+type oemConfig map[string]string
+var (
+	oemConfigs = map[string]oemConfig{
+		"digitalocean": oemConfig{
+			"from-digitalocean-metadata": "http://169.254.169.254/",
+			"convert-netconf": "digitalocean",
+		},
+		"ec2-compat": oemConfig{
+			"from-ec2-metadata": "http://169.254.169.254/",
+			"from-configdrive": "/media/configdrive",
+		},
+		"rackspace-onmetal": oemConfig{
+			"from-configdrive": "/media/configdrive",
+			"convert-netconf": "debian",
+		},
+	}
+)
 func main() {
+	failure := false
 	flag.Parse()
-	die := func() {
-		if ignoreFailure {
-			os.Exit(0)
-		}
-		os.Exit(1)
-	}
+	if c, ok := oemConfigs[flags.oem]; ok {
+		for k, v := range c {
+			flag.Set(k, v)
+		}
+	} else if flags.oem != "" {
+		oems := make([]string, 0, len(oemConfigs))
+		for k := range oemConfigs {
+			oems = append(oems, k)
+		}
+		fmt.Printf("Invalid option to --oem: %q. Supported options: %q\n", flags.oem, oems)
+		os.Exit(2)
+	}
-	if printVersion == true {
+	if flags.printVersion == true {
 		fmt.Printf("coreos-cloudinit version %s\n", version)
 		os.Exit(0)
 	}
-	if convertNetconf != "" && sources.configDrive == "" {
-		fmt.Println("-convert-netconf flag requires -from-configdrive")
-		os.Exit(1)
-	}
-	switch convertNetconf {
+	switch flags.convertNetconf {
 	case "":
 	case "debian":
+	case "digitalocean":
 	default:
-		fmt.Printf("Invalid option to -convert-netconf: '%s'. Supported options: 'debian'\n", convertNetconf)
-		os.Exit(1)
+		fmt.Printf("Invalid option to -convert-netconf: '%s'. Supported options: 'debian, digitalocean'\n", flags.convertNetconf)
+		os.Exit(2)
 	}
 	dss := getDatasources()
 	if len(dss) == 0 {
-		fmt.Println("Provide at least one of --from-file, --from-configdrive, --from-metadata-service, --from-url or --from-proc-cmdline")
-		os.Exit(1)
+		fmt.Println("Provide at least one of --from-file, --from-configdrive, --from-ec2-metadata, --from-cloudsigma-metadata, --from-url or --from-proc-cmdline")
+		os.Exit(2)
 	}
 	ds := selectDatasource(dss)
 	if ds == nil {
 		fmt.Println("No datasources available in time")
-		die()
+		os.Exit(1)
 	}
 	fmt.Printf("Fetching user-data from datasource of type %q\n", ds.Type())
 	userdataBytes, err := ds.FetchUserdata()
 	if err != nil {
-		fmt.Printf("Failed fetching user-data from datasource: %v\n", err)
-		die()
+		fmt.Printf("Failed fetching user-data from datasource: %v\nContinuing...\n", err)
+		failure = true
 	}
 	fmt.Printf("Fetching meta-data from datasource of type %q\n", ds.Type())
 	metadataBytes, err := ds.FetchMetadata()
 	if err != nil {
 		fmt.Printf("Failed fetching meta-data from datasource: %v\n", err)
-		die()
+		os.Exit(1)
 	}
 	// Extract IPv4 addresses from metadata if possible
@@ -108,23 +148,34 @@ func main() {
 		subs, err = initialize.ExtractIPsFromMetadata(metadataBytes)
 		if err != nil {
 			fmt.Printf("Failed extracting IPs from meta-data: %v\n", err)
-			die()
+			os.Exit(1)
 		}
 	}
 	// Apply environment to user-data
-	env := initialize.NewEnvironment("/", ds.ConfigRoot(), workspace, convertNetconf, sshKeyName, subs)
+	env := initialize.NewEnvironment("/", ds.ConfigRoot(), flags.workspace, flags.convertNetconf, flags.sshKeyName, subs)
 	userdata := env.Apply(string(userdataBytes))
 	var ccm, ccu *initialize.CloudConfig
 	var script *system.Script
 	if ccm, err = initialize.ParseMetaData(string(metadataBytes)); err != nil {
 		fmt.Printf("Failed to parse meta-data: %v\n", err)
-		die()
+		os.Exit(1)
 	}
+	if ccm != nil && ccm.NetworkConfigPath != "" {
+		fmt.Printf("Fetching network config from datasource of type %q\n", ds.Type())
+		netconfBytes, err := ds.FetchNetworkConfig(ccm.NetworkConfigPath)
+		if err != nil {
+			fmt.Printf("Failed fetching network config from datasource: %v\n", err)
+			os.Exit(1)
+		}
+		ccm.NetworkConfig = string(netconfBytes)
+	}
 	if ud, err := initialize.ParseUserData(userdata); err != nil {
-		fmt.Printf("Failed to parse user-data: %v\n", err)
-		die()
+		fmt.Printf("Failed to parse user-data: %v\nContinuing...\n", err)
+		failure = true
 	} else {
 		switch t := ud.(type) {
 		case *initialize.CloudConfig:
@@ -152,16 +203,20 @@ func main() {
 	if cc != nil {
 		if err = initialize.Apply(*cc, env); err != nil {
 			fmt.Printf("Failed to apply cloud-config: %v\n", err)
-			die()
+			os.Exit(1)
 		}
 	}
 	if script != nil {
 		if err = runScript(*script, env); err != nil {
 			fmt.Printf("Failed to run script: %v\n", err)
-			die()
+			os.Exit(1)
 		}
 	}
+	if failure && !flags.ignoreFailure {
+		os.Exit(1)
+	}
 }
 // mergeCloudConfig merges certain options from mdcc (a CloudConfig derived from
@@ -172,7 +227,7 @@ func main() {
 func mergeCloudConfig(mdcc, udcc initialize.CloudConfig) (cc initialize.CloudConfig) {
 	if mdcc.Hostname != "" {
 		if udcc.Hostname != "" {
-			fmt.Printf("Warning: user-data hostname (%s) overrides metadata hostname (%s)", udcc.Hostname, mdcc.Hostname)
+			fmt.Printf("Warning: user-data hostname (%s) overrides metadata hostname (%s)\n", udcc.Hostname, mdcc.Hostname)
 		} else {
 			udcc.Hostname = mdcc.Hostname
 		}
@@ -183,11 +238,18 @@ func mergeCloudConfig(mdcc, udcc initialize.CloudConfig) (cc initialize.CloudCon
 	}
 	if mdcc.NetworkConfigPath != "" {
 		if udcc.NetworkConfigPath != "" {
-			fmt.Printf("Warning: user-data NetworkConfigPath %s overrides metadata NetworkConfigPath %s", udcc.NetworkConfigPath, mdcc.NetworkConfigPath)
+			fmt.Printf("Warning: user-data NetworkConfigPath %s overrides metadata NetworkConfigPath %s\n", udcc.NetworkConfigPath, mdcc.NetworkConfigPath)
 		} else {
 			udcc.NetworkConfigPath = mdcc.NetworkConfigPath
 		}
 	}
+	if mdcc.NetworkConfig != "" {
+		if udcc.NetworkConfig != "" {
+			fmt.Printf("Warning: user-data NetworkConfig %s overrides metadata NetworkConfig %s\n", udcc.NetworkConfig, mdcc.NetworkConfig)
+		} else {
+			udcc.NetworkConfig = mdcc.NetworkConfig
+		}
+	}
 	return udcc
 }
@@ -195,20 +257,29 @@ func mergeCloudConfig(mdcc, udcc initialize.CloudConfig) (cc initialize.CloudCon
 // on the different source command-line flags.
 func getDatasources() []datasource.Datasource {
 	dss := make([]datasource.Datasource, 0, 5)
-	if sources.file != "" {
-		dss = append(dss, datasource.NewLocalFile(sources.file))
+	if flags.sources.file != "" {
+		dss = append(dss, file.NewDatasource(flags.sources.file))
 	}
-	if sources.url != "" {
-		dss = append(dss, datasource.NewRemoteFile(sources.url))
+	if flags.sources.url != "" {
+		dss = append(dss, url.NewDatasource(flags.sources.url))
 	}
-	if sources.configDrive != "" {
-		dss = append(dss, datasource.NewConfigDrive(sources.configDrive))
+	if flags.sources.configDrive != "" {
+		dss = append(dss, configdrive.NewDatasource(flags.sources.configDrive))
 	}
-	if sources.metadataService {
-		dss = append(dss, datasource.NewMetadataService())
+	if flags.sources.metadataService {
+		dss = append(dss, ec2.NewDatasource(ec2.DefaultAddress))
 	}
-	if sources.procCmdLine {
-		dss = append(dss, datasource.NewProcCmdline())
+	if flags.sources.ec2MetadataService != "" {
+		dss = append(dss, ec2.NewDatasource(flags.sources.ec2MetadataService))
+	}
+	if flags.sources.cloudSigmaMetadataService {
+		dss = append(dss, cloudsigma.NewServerContextService())
+	}
+	if flags.sources.digitalOceanMetadataService != "" {
+		dss = append(dss, digitalocean.NewDatasource(flags.sources.digitalOceanMetadataService))
+	}
+	if flags.sources.procCmdLine {
+		dss = append(dss, proc_cmdline.NewDatasource())
 	}
 	return dss
 }
@@ -240,7 +311,7 @@ func selectDatasource(sources []datasource.Datasource) datasource.Datasource {
 		select {
 		case <-stop:
 			return
-		case <-time.Tick(duration):
+		case <-time.After(duration):
 			duration = pkg.ExpBackoff(duration, datasourceMaxInterval)
 		}
 	}
@@ -257,7 +328,7 @@ func selectDatasource(sources []datasource.Datasource) datasource.Datasource {
 	select {
 	case s = <-ds:
 	case <-done:
-	case <-time.Tick(datasourceTimeout):
+	case <-time.After(datasourceTimeout):
 	}
 	close(stop)

View File

@@ -12,6 +12,7 @@ func TestMergeCloudConfig(t *testing.T) {
 		SSHAuthorizedKeys: []string{"abc", "def"},
 		Hostname: "foobar",
 		NetworkConfigPath: "/path/somewhere",
+		NetworkConfig: `{}`,
 	}
 	for i, tt := range []struct {
 		udcc initialize.CloudConfig
@@ -36,6 +37,7 @@ func TestMergeCloudConfig(t *testing.T) {
 			initialize.CloudConfig{
 				Hostname: "meta-hostname",
 				NetworkConfigPath: "/path/meta",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 			simplecc,
 		},
@@ -45,6 +47,7 @@ func TestMergeCloudConfig(t *testing.T) {
 				SSHAuthorizedKeys: []string{"abc", "def"},
 				Hostname: "user-hostname",
 				NetworkConfigPath: "/path/somewhere",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 			initialize.CloudConfig{
 				SSHAuthorizedKeys: []string{"woof", "qux"},
@@ -54,6 +57,7 @@ func TestMergeCloudConfig(t *testing.T) {
 				SSHAuthorizedKeys: []string{"abc", "def", "woof", "qux"},
 				Hostname: "user-hostname",
 				NetworkConfigPath: "/path/somewhere",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 		},
 		{
@@ -64,11 +68,13 @@ func TestMergeCloudConfig(t *testing.T) {
 			initialize.CloudConfig{
 				SSHAuthorizedKeys: []string{"zaphod", "beeblebrox"},
 				NetworkConfigPath: "/dev/fun",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 			initialize.CloudConfig{
 				Hostname: "supercool",
 				SSHAuthorizedKeys: []string{"zaphod", "beeblebrox"},
 				NetworkConfigPath: "/dev/fun",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 		},
 		{
@@ -80,11 +86,13 @@ func TestMergeCloudConfig(t *testing.T) {
 			initialize.CloudConfig{
 				Hostname: "youyouyou",
 				NetworkConfigPath: "meta-meta-yo",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 			initialize.CloudConfig{
 				Hostname: "mememe",
 				ManageEtcHosts: initialize.EtcHosts("lolz"),
 				NetworkConfigPath: "meta-meta-yo",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 		},
 		{
@@ -95,10 +103,12 @@ func TestMergeCloudConfig(t *testing.T) {
 			initialize.CloudConfig{
 				ManageEtcHosts: initialize.EtcHosts("lolz"),
 				NetworkConfigPath: "meta-meta-yo",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 			initialize.CloudConfig{
 				Hostname: "mememe",
 				NetworkConfigPath: "meta-meta-yo",
+				NetworkConfig: `{"hostname":"test"}`,
 			},
 		},
 	} {

View File

@@ -1,48 +0,0 @@
package datasource
import (
"io/ioutil"
"os"
"path"
)
type configDrive struct {
root string
}
func NewConfigDrive(root string) *configDrive {
return &configDrive{path.Join(root, "openstack")}
}
func (cd *configDrive) IsAvailable() bool {
_, err := os.Stat(cd.root)
return !os.IsNotExist(err)
}
func (cd *configDrive) AvailabilityChanges() bool {
return true
}
func (cd *configDrive) ConfigRoot() string {
return cd.root
}
func (cd *configDrive) FetchMetadata() ([]byte, error) {
return cd.readFile("meta_data.json")
}
func (cd *configDrive) FetchUserdata() ([]byte, error) {
return cd.readFile("user_data")
}
func (cd *configDrive) Type() string {
return "cloud-drive"
}
func (cd *configDrive) readFile(filename string) ([]byte, error) {
data, err := ioutil.ReadFile(path.Join(cd.root, "latest", filename))
if os.IsNotExist(err) {
err = nil
}
return data, err
}

View File

@@ -0,0 +1,67 @@
package configdrive
import (
"fmt"
"io/ioutil"
"os"
"path"
)
const (
openstackApiVersion = "latest"
)
type configDrive struct {
root string
readFile func(filename string) ([]byte, error)
}
func NewDatasource(root string) *configDrive {
return &configDrive{root, ioutil.ReadFile}
}
func (cd *configDrive) IsAvailable() bool {
_, err := os.Stat(cd.root)
return !os.IsNotExist(err)
}
func (cd *configDrive) AvailabilityChanges() bool {
return true
}
func (cd *configDrive) ConfigRoot() string {
return cd.openstackRoot()
}
func (cd *configDrive) FetchMetadata() ([]byte, error) {
return cd.tryReadFile(path.Join(cd.openstackVersionRoot(), "meta_data.json"))
}
func (cd *configDrive) FetchUserdata() ([]byte, error) {
return cd.tryReadFile(path.Join(cd.openstackVersionRoot(), "user_data"))
}
func (cd *configDrive) FetchNetworkConfig(filename string) ([]byte, error) {
return cd.tryReadFile(path.Join(cd.openstackRoot(), filename))
}
func (cd *configDrive) Type() string {
return "cloud-drive"
}
func (cd *configDrive) openstackRoot() string {
return path.Join(cd.root, "openstack")
}
func (cd *configDrive) openstackVersionRoot() string {
return path.Join(cd.openstackRoot(), openstackApiVersion)
}
func (cd *configDrive) tryReadFile(filename string) ([]byte, error) {
fmt.Printf("Attempting to read from %q\n", filename)
data, err := cd.readFile(filename)
if os.IsNotExist(err) {
err = nil
}
return data, err
}

View File

@@ -0,0 +1,125 @@
package configdrive
import (
"os"
"testing"
)
type mockFilesystem []string
func (m mockFilesystem) readFile(filename string) ([]byte, error) {
for _, file := range m {
if file == filename {
return []byte(filename), nil
}
}
return nil, os.ErrNotExist
}
func TestFetchMetadata(t *testing.T) {
for _, tt := range []struct {
root string
filename string
files mockFilesystem
}{
{
"/",
"",
mockFilesystem{},
},
{
"/",
"/openstack/latest/meta_data.json",
mockFilesystem([]string{"/openstack/latest/meta_data.json"}),
},
{
"/media/configdrive",
"/media/configdrive/openstack/latest/meta_data.json",
mockFilesystem([]string{"/media/configdrive/openstack/latest/meta_data.json"}),
},
} {
cd := configDrive{tt.root, tt.files.readFile}
filename, err := cd.FetchMetadata()
if err != nil {
t.Fatalf("bad error for %q: want %q, got %q", tt, nil, err)
}
if string(filename) != tt.filename {
t.Fatalf("bad path for %q: want %q, got %q", tt, tt.filename, filename)
}
}
}
func TestFetchUserdata(t *testing.T) {
for _, tt := range []struct {
root string
filename string
files mockFilesystem
}{
{
"/",
"",
mockFilesystem{},
},
{
"/",
"/openstack/latest/user_data",
mockFilesystem([]string{"/openstack/latest/user_data"}),
},
{
"/media/configdrive",
"/media/configdrive/openstack/latest/user_data",
mockFilesystem([]string{"/media/configdrive/openstack/latest/user_data"}),
},
} {
cd := configDrive{tt.root, tt.files.readFile}
filename, err := cd.FetchUserdata()
if err != nil {
t.Fatalf("bad error for %q: want %q, got %q", tt, nil, err)
}
if string(filename) != tt.filename {
t.Fatalf("bad path for %q: want %q, got %q", tt, tt.filename, filename)
}
}
}
func TestConfigRoot(t *testing.T) {
for _, tt := range []struct {
root string
configRoot string
}{
{
"/",
"/openstack",
},
{
"/media/configdrive",
"/media/configdrive/openstack",
},
} {
cd := configDrive{tt.root, nil}
if configRoot := cd.ConfigRoot(); configRoot != tt.configRoot {
t.Fatalf("bad config root for %q: want %q, got %q", tt, tt.configRoot, configRoot)
}
}
}
func TestNewDatasource(t *testing.T) {
for _, tt := range []struct {
root string
expectRoot string
}{
{
root: "",
expectRoot: "",
},
{
root: "/media/configdrive",
expectRoot: "/media/configdrive",
},
} {
service := NewDatasource(tt.root)
if service.root != tt.expectRoot {
t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.root)
}
}
}

View File

@@ -6,5 +6,6 @@ type Datasource interface {
 	ConfigRoot() string
 	FetchMetadata() ([]byte, error)
 	FetchUserdata() ([]byte, error)
+	FetchNetworkConfig(string) ([]byte, error)
 	Type() string
 }

View File

@@ -1,4 +1,4 @@
-package datasource
+package file
 import (
 	"io/ioutil"
@@ -9,7 +9,7 @@ type localFile struct {
 	path string
 }
-func NewLocalFile(path string) *localFile {
+func NewDatasource(path string) *localFile {
 	return &localFile{path}
 }
@@ -34,6 +34,10 @@ func (f *localFile) FetchUserdata() ([]byte, error) {
 	return ioutil.ReadFile(f.path)
 }
+func (f *localFile) FetchNetworkConfig(filename string) ([]byte, error) {
+	return nil, nil
+}
 func (f *localFile) Type() string {
 	return "local-file"
 }

View File

@@ -0,0 +1,145 @@
package cloudsigma
import (
"encoding/base64"
"encoding/json"
"os"
"strings"
"github.com/coreos/coreos-cloudinit/third_party/github.com/cloudsigma/cepgo"
)
const (
userDataFieldName = "cloudinit-user-data"
)
type serverContextService struct {
client interface {
All() (interface{}, error)
Key(string) (interface{}, error)
Meta() (map[string]string, error)
FetchRaw(string) ([]byte, error)
}
}
func NewServerContextService() *serverContextService {
return &serverContextService{
client: cepgo.NewCepgo(),
}
}
func (_ *serverContextService) IsAvailable() bool {
productNameFile, err := os.Open("/sys/class/dmi/id/product_name")
if err != nil {
return false
}
productName := make([]byte, 10)
_, err = productNameFile.Read(productName)
return err == nil && string(productName) == "CloudSigma"
}
func (_ *serverContextService) AvailabilityChanges() bool {
return true
}
func (_ *serverContextService) ConfigRoot() string {
return ""
}
func (_ *serverContextService) Type() string {
return "server-context"
}
func (scs *serverContextService) FetchMetadata() ([]byte, error) {
var (
inputMetadata struct {
Name string `json:"name"`
UUID string `json:"uuid"`
Meta map[string]string `json:"meta"`
Nics []struct {
Runtime struct {
InterfaceType string `json:"interface_type"`
IPv4 struct {
IP string `json:"uuid"`
} `json:"ip_v4"`
} `json:"runtime"`
} `json:"nics"`
}
outputMetadata struct {
Hostname string `json:"name"`
PublicKeys map[string]string `json:"public_keys"`
LocalIPv4 string `json:"local-ipv4"`
PublicIPv4 string `json:"public-ipv4"`
}
)
rawMetadata, err := scs.client.FetchRaw("")
if err != nil {
return []byte{}, err
}
err = json.Unmarshal(rawMetadata, &inputMetadata)
if err != nil {
return []byte{}, err
}
if inputMetadata.Name != "" {
outputMetadata.Hostname = inputMetadata.Name
} else {
outputMetadata.Hostname = inputMetadata.UUID
}
if key, ok := inputMetadata.Meta["ssh_public_key"]; ok {
splitted := strings.Split(key, " ")
outputMetadata.PublicKeys = make(map[string]string)
outputMetadata.PublicKeys[splitted[len(splitted)-1]] = key
}
for _, nic := range inputMetadata.Nics {
if nic.Runtime.IPv4.IP != "" {
if nic.Runtime.InterfaceType == "public" {
outputMetadata.PublicIPv4 = nic.Runtime.IPv4.IP
} else {
outputMetadata.LocalIPv4 = nic.Runtime.IPv4.IP
}
}
}
return json.Marshal(outputMetadata)
}
func (scs *serverContextService) FetchUserdata() ([]byte, error) {
metadata, err := scs.client.Meta()
if err != nil {
return []byte{}, err
}
userData, ok := metadata[userDataFieldName]
if ok && isBase64Encoded(userDataFieldName, metadata) {
if decodedUserData, err := base64.StdEncoding.DecodeString(userData); err == nil {
return decodedUserData, nil
} else {
return []byte{}, nil
}
}
return []byte(userData), nil
}
func (scs *serverContextService) FetchNetworkConfig(a string) ([]byte, error) {
return nil, nil
}
func isBase64Encoded(field string, userdata map[string]string) bool {
base64Fields, ok := userdata["base64_fields"]
if !ok {
return false
}
for _, base64Field := range strings.Split(base64Fields, ",") {
if field == base64Field {
return true
}
}
return false
}
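Not part of the changeset above — a small, self-contained sketch of the base64_fields convention this datasource relies on. The metadata values are illustrative only; the helper mirrors isBase64Encoded.

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// isBase64Encoded mirrors the helper above: a field is treated as base64
// only if it is listed in the comma-separated "base64_fields" meta key.
func isBase64Encoded(field string, meta map[string]string) bool {
	for _, f := range strings.Split(meta["base64_fields"], ",") {
		if f == field {
			return true
		}
	}
	return false
}

func main() {
	meta := map[string]string{
		"base64_fields":       "cloudinit-user-data",
		"cloudinit-user-data": "aG9zdG5hbWU6IGNvcmVvc190ZXN0", // "hostname: coreos_test"
	}
	data := meta["cloudinit-user-data"]
	if isBase64Encoded("cloudinit-user-data", meta) {
		if decoded, err := base64.StdEncoding.DecodeString(data); err == nil {
			data = string(decoded)
		}
	}
	fmt.Println(data) // hostname: coreos_test
}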

View File

@@ -0,0 +1,152 @@
package cloudsigma
import (
"encoding/json"
"reflect"
"testing"
)
type fakeCepgoClient struct {
raw []byte
meta map[string]string
keys map[string]interface{}
err error
}
func (f *fakeCepgoClient) All() (interface{}, error) {
return f.keys, f.err
}
func (f *fakeCepgoClient) Key(key string) (interface{}, error) {
return f.keys[key], f.err
}
func (f *fakeCepgoClient) Meta() (map[string]string, error) {
return f.meta, f.err
}
func (f *fakeCepgoClient) FetchRaw(key string) ([]byte, error) {
return f.raw, f.err
}
func TestServerContextFetchMetadata(t *testing.T) {
var metadata struct {
Hostname string `json:"name"`
PublicKeys map[string]string `json:"public_keys"`
LocalIPv4 string `json:"local-ipv4"`
PublicIPv4 string `json:"public-ipv4"`
}
client := new(fakeCepgoClient)
scs := NewServerContextService()
scs.client = client
client.raw = []byte(`{
"context": true,
"cpu": 4000,
"cpu_model": null,
"cpus_instead_of_cores": false,
"enable_numa": false,
"grantees": [],
"hv_relaxed": false,
"hv_tsc": false,
"jobs": [],
"mem": 4294967296,
"meta": {
"base64_fields": "cloudinit-user-data",
"cloudinit-user-data": "I2Nsb3VkLWNvbmZpZwoKaG9zdG5hbWU6IGNvcmVvczE=",
"ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2E.../hQ5D5 john@doe"
},
"name": "coreos",
"nics": [
{
"runtime": {
"interface_type": "public",
"ip_v4": {
"uuid": "31.171.251.74"
},
"ip_v6": null
},
"vlan": null
}
],
"smp": 2,
"status": "running",
"uuid": "20a0059b-041e-4d0c-bcc6-9b2852de48b3"
}`)
metadataBytes, err := scs.FetchMetadata()
if err != nil {
t.Error(err.Error())
}
if err := json.Unmarshal(metadataBytes, &metadata); err != nil {
t.Error(err.Error())
}
if metadata.Hostname != "coreos" {
t.Errorf("Hostname is not 'coreos' but %s instead", metadata.Hostname)
}
if metadata.PublicKeys["john@doe"] != "ssh-rsa AAAAB3NzaC1yc2E.../hQ5D5 john@doe" {
t.Error("Public SSH Keys are not being read properly")
}
if metadata.LocalIPv4 != "" {
t.Errorf("Local IP is not empty but %s instead", metadata.LocalIPv4)
}
if metadata.PublicIPv4 != "31.171.251.74" {
t.Errorf("Local IP is not 31.171.251.74 but %s instead", metadata.PublicIPv4)
}
}
func TestServerContextFetchUserdata(t *testing.T) {
client := new(fakeCepgoClient)
scs := NewServerContextService()
scs.client = client
userdataSets := []struct {
in map[string]string
err bool
out []byte
}{
{map[string]string{
"base64_fields": "cloudinit-user-data",
"cloudinit-user-data": "aG9zdG5hbWU6IGNvcmVvc190ZXN0",
}, false, []byte("hostname: coreos_test")},
{map[string]string{
"cloudinit-user-data": "#cloud-config\\nhostname: coreos1",
}, false, []byte("#cloud-config\\nhostname: coreos1")},
{map[string]string{}, false, []byte{}},
}
for i, set := range userdataSets {
client.meta = set.in
got, err := scs.FetchUserdata()
if (err != nil) != set.err {
t.Errorf("case %d: bad error state (got %t, want %t)", i, err != nil, set.err)
}
if !reflect.DeepEqual(got, set.out) {
t.Errorf("case %d: got %s, want %s", i, got, set.out)
}
}
}
func TestServerContextDecodingBase64UserData(t *testing.T) {
base64Sets := []struct {
in string
out bool
}{
{"cloudinit-user-data,foo,bar", true},
{"bar,cloudinit-user-data,foo,bar", true},
{"cloudinit-user-data", true},
{"", false},
{"foo", false},
}
for _, set := range base64Sets {
userdata := map[string]string{"base64_fields": set.in}
if isBase64Encoded("cloudinit-user-data", userdata) != set.out {
t.Errorf("isBase64Encoded(cloudinit-user-data, %s) should be %t", userdata, set.out)
}
}
}

View File

@@ -0,0 +1,107 @@
package digitalocean
import (
"encoding/json"
"strconv"
"github.com/coreos/coreos-cloudinit/datasource/metadata"
)
const (
DefaultAddress = "http://169.254.169.254/"
apiVersion = "metadata/v1"
userdataUrl = apiVersion + "/user-data"
metadataPath = apiVersion + ".json"
)
type Address struct {
IPAddress string `json:"ip_address"`
Netmask string `json:"netmask"`
Cidr int `json:"cidr"`
Gateway string `json:"gateway"`
}
type Interface struct {
IPv4 *Address `json:"ipv4"`
IPv6 *Address `json:"ipv6"`
MAC string `json:"mac"`
Type string `json:"type"`
}
type Interfaces struct {
Public []Interface `json:"public"`
Private []Interface `json:"private"`
}
type DNS struct {
Nameservers []string `json:"nameservers"`
}
type Metadata struct {
Hostname string `json:"hostname"`
Interfaces Interfaces `json:"interfaces"`
PublicKeys []string `json:"public_keys"`
DNS DNS `json:"dns"`
}
type metadataService struct {
interfaces Interfaces
dns DNS
metadata.MetadataService
}
func NewDatasource(root string) *metadataService {
return &metadataService{MetadataService: metadata.NewDatasource(root, apiVersion, userdataUrl, metadataPath)}
}
func (ms *metadataService) FetchMetadata() ([]byte, error) {
data, err := ms.FetchData(ms.MetadataUrl())
if err != nil || len(data) == 0 {
return []byte{}, err
}
var metadata Metadata
if err := json.Unmarshal(data, &metadata); err != nil {
return []byte{}, err
}
ms.interfaces = metadata.Interfaces
ms.dns = metadata.DNS
attrs := make(map[string]interface{})
if len(metadata.Interfaces.Public) > 0 {
if metadata.Interfaces.Public[0].IPv4 != nil {
attrs["public-ipv4"] = metadata.Interfaces.Public[0].IPv4.IPAddress
}
if metadata.Interfaces.Public[0].IPv6 != nil {
attrs["public-ipv6"] = metadata.Interfaces.Public[0].IPv6.IPAddress
}
}
if len(metadata.Interfaces.Private) > 0 {
if metadata.Interfaces.Private[0].IPv4 != nil {
attrs["local-ipv4"] = metadata.Interfaces.Private[0].IPv4.IPAddress
}
if metadata.Interfaces.Private[0].IPv6 != nil {
attrs["local-ipv6"] = metadata.Interfaces.Private[0].IPv6.IPAddress
}
}
attrs["hostname"] = metadata.Hostname
keys := make(map[string]string)
for i, key := range metadata.PublicKeys {
keys[strconv.Itoa(i)] = key
}
attrs["public_keys"] = keys
return json.Marshal(attrs)
}
func (ms metadataService) FetchNetworkConfig(filename string) ([]byte, error) {
return json.Marshal(Metadata{
Interfaces: ms.interfaces,
DNS: ms.dns,
})
}
func (ms metadataService) Type() string {
return "digitalocean-metadata-service"
}
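For illustration only (trimmed copies of the types above, hypothetical addresses): how the nested DigitalOcean interface structure flattens into the public-ipv4/local-ipv4 attributes that FetchMetadata emits.

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed stand-ins for the Address/Interface/Interfaces types above.
type address struct {
	IPAddress string `json:"ip_address"`
}
type iface struct {
	IPv4 *address `json:"ipv4"`
}
type interfaces struct {
	Public  []iface `json:"public"`
	Private []iface `json:"private"`
}

func main() {
	raw := []byte(`{"public":[{"ipv4":{"ip_address":"192.168.1.2"}}],"private":[{"ipv4":{"ip_address":"10.0.0.2"}}]}`)
	var ifs interfaces
	if err := json.Unmarshal(raw, &ifs); err != nil {
		panic(err)
	}
	// Only the first public/private interface is surfaced, as in FetchMetadata above.
	attrs := map[string]string{}
	if len(ifs.Public) > 0 && ifs.Public[0].IPv4 != nil {
		attrs["public-ipv4"] = ifs.Public[0].IPv4.IPAddress
	}
	if len(ifs.Private) > 0 && ifs.Private[0].IPv4 != nil {
		attrs["local-ipv4"] = ifs.Private[0].IPv4.IPAddress
	}
	out, _ := json.Marshal(attrs)
	fmt.Println(string(out)) // {"local-ipv4":"10.0.0.2","public-ipv4":"192.168.1.2"}
}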

View File

@@ -0,0 +1,99 @@
package digitalocean
import (
"bytes"
"fmt"
"testing"
"github.com/coreos/coreos-cloudinit/datasource/metadata"
"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
"github.com/coreos/coreos-cloudinit/pkg"
)
func TestType(t *testing.T) {
want := "digitalocean-metadata-service"
if kind := (metadataService{}).Type(); kind != want {
t.Fatalf("bad type: want %q, got %q", want, kind)
}
}
func TestFetchMetadata(t *testing.T) {
for _, tt := range []struct {
root string
metadataPath string
resources map[string]string
expect []byte
clientErr error
expectErr error
}{
{
root: "/",
metadataPath: "v1.json",
resources: map[string]string{
"/v1.json": "bad",
},
expectErr: fmt.Errorf("invalid character 'b' looking for beginning of value"),
},
{
root: "/",
metadataPath: "v1.json",
resources: map[string]string{
"/v1.json": `{
"droplet_id": 1,
"user_data": "hello",
"vendor_data": "hello",
"public_keys": [
"publickey1",
"publickey2"
],
"region": "nyc2",
"interfaces": {
"public": [
{
"ipv4": {
"ip_address": "192.168.1.2",
"netmask": "255.255.255.0",
"gateway": "192.168.1.1"
},
"ipv6": {
"ip_address": "fe00::",
"cidr": 126,
"gateway": "fe00::"
},
"mac": "ab:cd:ef:gh:ij",
"type": "public"
}
]
}
}`,
},
expect: []byte(`{"hostname":"","public-ipv4":"192.168.1.2","public-ipv6":"fe00::","public_keys":{"0":"publickey1","1":"publickey2"}}`),
},
{
clientErr: pkg.ErrTimeout{fmt.Errorf("test error")},
expectErr: pkg.ErrTimeout{fmt.Errorf("test error")},
},
} {
service := &metadataService{
MetadataService: metadata.MetadataService{
Root: tt.root,
Client: &test.HttpClient{tt.resources, tt.clientErr},
MetadataPath: tt.metadataPath,
},
}
metadata, err := service.FetchMetadata()
if Error(err) != Error(tt.expectErr) {
t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
}
if !bytes.Equal(metadata, tt.expect) {
t.Fatalf("bad fetch (%q): want %q, got %q", tt.resources, tt.expect, metadata)
}
}
}
func Error(err error) string {
if err != nil {
return err.Error()
}
return ""
}

View File

@@ -0,0 +1,107 @@
package ec2
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"strings"
"github.com/coreos/coreos-cloudinit/datasource/metadata"
"github.com/coreos/coreos-cloudinit/pkg"
)
const (
DefaultAddress = "http://169.254.169.254/"
apiVersion = "2009-04-04"
userdataPath = apiVersion + "/user-data"
metadataPath = apiVersion + "/meta-data"
)
type metadataService struct {
metadata.MetadataService
}
func NewDatasource(root string) *metadataService {
return &metadataService{metadata.NewDatasource(root, apiVersion, userdataPath, metadataPath)}
}
func (ms metadataService) FetchMetadata() ([]byte, error) {
attrs := make(map[string]interface{})
if keynames, err := ms.fetchAttributes(fmt.Sprintf("%s/public-keys", ms.MetadataUrl())); err == nil {
keyIDs := make(map[string]string)
for _, keyname := range keynames {
tokens := strings.SplitN(keyname, "=", 2)
if len(tokens) != 2 {
return nil, fmt.Errorf("malformed public key: %q", keyname)
}
keyIDs[tokens[1]] = tokens[0]
}
keys := make(map[string]string)
for name, id := range keyIDs {
sshkey, err := ms.fetchAttribute(fmt.Sprintf("%s/public-keys/%s/openssh-key", ms.MetadataUrl(), id))
if err != nil {
return nil, err
}
keys[name] = sshkey
fmt.Printf("Found SSH key for %q\n", name)
}
attrs["public_keys"] = keys
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if hostname, err := ms.fetchAttribute(fmt.Sprintf("%s/hostname", ms.MetadataUrl())); err == nil {
attrs["hostname"] = hostname
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if localAddr, err := ms.fetchAttribute(fmt.Sprintf("%s/local-ipv4", ms.MetadataUrl())); err == nil {
attrs["local-ipv4"] = localAddr
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if publicAddr, err := ms.fetchAttribute(fmt.Sprintf("%s/public-ipv4", ms.MetadataUrl())); err == nil {
attrs["public-ipv4"] = publicAddr
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if content_path, err := ms.fetchAttribute(fmt.Sprintf("%s/network_config/content_path", ms.MetadataUrl())); err == nil {
attrs["network_config"] = map[string]string{
"content_path": content_path,
}
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
return json.Marshal(attrs)
}
func (ms metadataService) Type() string {
return "ec2-metadata-service"
}
func (ms metadataService) fetchAttributes(url string) ([]string, error) {
resp, err := ms.FetchData(url)
if err != nil {
return nil, err
}
scanner := bufio.NewScanner(bytes.NewBuffer(resp))
data := make([]string, 0)
for scanner.Scan() {
data = append(data, scanner.Text())
}
return data, scanner.Err()
}
func (ms metadataService) fetchAttribute(url string) (string, error) {
if attrs, err := ms.fetchAttributes(url); err == nil && len(attrs) > 0 {
return attrs[0], nil
} else {
return "", err
}
}
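A standalone sketch, separate from the changeset, of the "index=name" listing format that fetchAttributes returns for public-keys and that FetchMetadata splits above. The listing contents here are invented.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The EC2 meta-data index lists keys as "<index>=<name>", one per line.
	listing := "0=test1\n1=deploy"
	keyIDs := map[string]string{}
	for _, line := range strings.Split(strings.TrimSpace(listing), "\n") {
		tokens := strings.SplitN(line, "=", 2)
		if len(tokens) != 2 {
			panic(fmt.Sprintf("malformed public key: %q", line))
		}
		keyIDs[tokens[1]] = tokens[0] // name -> index
	}
	for name, id := range keyIDs {
		fmt.Printf("would fetch .../public-keys/%s/openssh-key for %q\n", id, name)
	}
}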

View File

@@ -0,0 +1,185 @@
package ec2
import (
"bytes"
"fmt"
"reflect"
"testing"
"github.com/coreos/coreos-cloudinit/datasource/metadata"
"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
"github.com/coreos/coreos-cloudinit/pkg"
)
func TestType(t *testing.T) {
want := "ec2-metadata-service"
if kind := (metadataService{}).Type(); kind != want {
t.Fatalf("bad type: want %q, got %q", want, kind)
}
}
func TestFetchAttributes(t *testing.T) {
for _, s := range []struct {
resources map[string]string
err error
tests []struct {
path string
val []string
}
}{
{
resources: map[string]string{
"/": "a\nb\nc/",
"/c/": "d\ne/",
"/c/e/": "f",
"/a": "1",
"/b": "2",
"/c/d": "3",
"/c/e/f": "4",
},
tests: []struct {
path string
val []string
}{
{"/", []string{"a", "b", "c/"}},
{"/b", []string{"2"}},
{"/c/d", []string{"3"}},
{"/c/e/", []string{"f"}},
},
},
{
err: fmt.Errorf("test error"),
tests: []struct {
path string
val []string
}{
{"", nil},
},
},
} {
service := metadataService{metadata.MetadataService{
Client: &test.HttpClient{s.resources, s.err},
}}
for _, tt := range s.tests {
attrs, err := service.fetchAttributes(tt.path)
if err != s.err {
t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.resources, s.err, err)
}
if !reflect.DeepEqual(attrs, tt.val) {
t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.resources, tt.val, attrs)
}
}
}
}
func TestFetchAttribute(t *testing.T) {
for _, s := range []struct {
resources map[string]string
err error
tests []struct {
path string
val string
}
}{
{
resources: map[string]string{
"/": "a\nb\nc/",
"/c/": "d\ne/",
"/c/e/": "f",
"/a": "1",
"/b": "2",
"/c/d": "3",
"/c/e/f": "4",
},
tests: []struct {
path string
val string
}{
{"/a", "1"},
{"/b", "2"},
{"/c/d", "3"},
{"/c/e/f", "4"},
},
},
{
err: fmt.Errorf("test error"),
tests: []struct {
path string
val string
}{
{"", ""},
},
},
} {
service := metadataService{metadata.MetadataService{
Client: &test.HttpClient{s.resources, s.err},
}}
for _, tt := range s.tests {
attr, err := service.fetchAttribute(tt.path)
if err != s.err {
t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.resources, s.err, err)
}
if attr != tt.val {
t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.resources, tt.val, attr)
}
}
}
}
func TestFetchMetadata(t *testing.T) {
for _, tt := range []struct {
root string
metadataPath string
resources map[string]string
expect []byte
clientErr error
expectErr error
}{
{
root: "/",
metadataPath: "2009-04-04/meta-data",
resources: map[string]string{
"/2009-04-04/meta-data/public-keys": "bad\n",
},
expectErr: fmt.Errorf("malformed public key: \"bad\""),
},
{
root: "/",
metadataPath: "2009-04-04/meta-data",
resources: map[string]string{
"/2009-04-04/meta-data/hostname": "host",
"/2009-04-04/meta-data/local-ipv4": "1.2.3.4",
"/2009-04-04/meta-data/public-ipv4": "5.6.7.8",
"/2009-04-04/meta-data/public-keys": "0=test1\n",
"/2009-04-04/meta-data/public-keys/0": "openssh-key",
"/2009-04-04/meta-data/public-keys/0/openssh-key": "key",
"/2009-04-04/meta-data/network_config/content_path": "path",
},
expect: []byte(`{"hostname":"host","local-ipv4":"1.2.3.4","network_config":{"content_path":"path"},"public-ipv4":"5.6.7.8","public_keys":{"test1":"key"}}`),
},
{
clientErr: pkg.ErrTimeout{fmt.Errorf("test error")},
expectErr: pkg.ErrTimeout{fmt.Errorf("test error")},
},
} {
service := &metadataService{metadata.MetadataService{
Root: tt.root,
Client: &test.HttpClient{tt.resources, tt.clientErr},
MetadataPath: tt.metadataPath,
}}
metadata, err := service.FetchMetadata()
if Error(err) != Error(tt.expectErr) {
t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
}
if !bytes.Equal(metadata, tt.expect) {
t.Fatalf("bad fetch (%q): want %q, got %q", tt.resources, tt.expect, metadata)
}
}
}
func Error(err error) string {
if err != nil {
return err.Error()
}
return ""
}

View File

@@ -0,0 +1,61 @@
package metadata
import (
"strings"
"github.com/coreos/coreos-cloudinit/pkg"
)
type MetadataService struct {
Root string
Client pkg.Getter
ApiVersion string
UserdataPath string
MetadataPath string
}
func NewDatasource(root, apiVersion, userdataPath, metadataPath string) MetadataService {
if !strings.HasSuffix(root, "/") {
root += "/"
}
return MetadataService{root, pkg.NewHttpClient(), apiVersion, userdataPath, metadataPath}
}
func (ms MetadataService) IsAvailable() bool {
_, err := ms.Client.Get(ms.Root + ms.ApiVersion)
return (err == nil)
}
func (ms MetadataService) AvailabilityChanges() bool {
return true
}
func (ms MetadataService) ConfigRoot() string {
return ms.Root
}
func (ms MetadataService) FetchUserdata() ([]byte, error) {
return ms.FetchData(ms.UserdataUrl())
}
func (ms MetadataService) FetchNetworkConfig(filename string) ([]byte, error) {
return nil, nil
}
func (ms MetadataService) FetchData(url string) ([]byte, error) {
if data, err := ms.Client.GetRetry(url); err == nil {
return data, err
} else if _, ok := err.(pkg.ErrNotFound); ok {
return []byte{}, nil
} else {
return data, err
}
}
func (ms MetadataService) MetadataUrl() string {
return (ms.Root + ms.MetadataPath)
}
func (ms MetadataService) UserdataUrl() string {
return (ms.Root + ms.UserdataPath)
}
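Illustrative only: the root-normalization rule used by NewDatasource above, shown as a tiny standalone program (the paths are the EC2 ones used elsewhere in this changeset).

package main

import (
	"fmt"
	"strings"
)

// normalizeRoot mirrors NewDatasource above: a trailing slash is appended so
// that MetadataUrl/UserdataUrl can concatenate paths safely.
func normalizeRoot(root string) string {
	if !strings.HasSuffix(root, "/") {
		root += "/"
	}
	return root
}

func main() {
	root := normalizeRoot("http://169.254.169.254")
	fmt.Println(root + "2009-04-04/meta-data") // http://169.254.169.254/2009-04-04/meta-data
	fmt.Println(root + "2009-04-04/user-data") // http://169.254.169.254/2009-04-04/user-data
}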

View File

@@ -0,0 +1,171 @@
package metadata
import (
"bytes"
"fmt"
"testing"
"github.com/coreos/coreos-cloudinit/datasource/metadata/test"
"github.com/coreos/coreos-cloudinit/pkg"
)
func TestAvailabilityChanges(t *testing.T) {
want := true
if ac := (MetadataService{}).AvailabilityChanges(); ac != want {
t.Fatalf("bad AvailabilityChanges: want %q, got %q", want, ac)
}
}
func TestIsAvailable(t *testing.T) {
for _, tt := range []struct {
root string
apiVersion string
resources map[string]string
expect bool
}{
{
root: "/",
apiVersion: "2009-04-04",
resources: map[string]string{
"/2009-04-04": "",
},
expect: true,
},
{
root: "/",
resources: map[string]string{},
expect: false,
},
} {
service := &MetadataService{
Root: tt.root,
Client: &test.HttpClient{tt.resources, nil},
ApiVersion: tt.apiVersion,
}
if a := service.IsAvailable(); a != tt.expect {
t.Fatalf("bad isAvailable (%q): want %q, got %q", tt.resources, tt.expect, a)
}
}
}
func TestFetchUserdata(t *testing.T) {
for _, tt := range []struct {
root string
userdataPath string
resources map[string]string
userdata []byte
clientErr error
expectErr error
}{
{
root: "/",
userdataPath: "2009-04-04/user-data",
resources: map[string]string{
"/2009-04-04/user-data": "hello",
},
userdata: []byte("hello"),
},
{
root: "/",
clientErr: pkg.ErrNotFound{fmt.Errorf("test not found error")},
userdata: []byte{},
},
{
root: "/",
clientErr: pkg.ErrTimeout{fmt.Errorf("test timeout error")},
expectErr: pkg.ErrTimeout{fmt.Errorf("test timeout error")},
},
} {
service := &MetadataService{
Root: tt.root,
Client: &test.HttpClient{tt.resources, tt.clientErr},
UserdataPath: tt.userdataPath,
}
data, err := service.FetchUserdata()
if Error(err) != Error(tt.expectErr) {
t.Fatalf("bad error (%q): want %q, got %q", tt.resources, tt.expectErr, err)
}
if !bytes.Equal(data, tt.userdata) {
t.Fatalf("bad userdata (%q): want %q, got %q", tt.resources, tt.userdata, data)
}
}
}
func TestUrls(t *testing.T) {
for _, tt := range []struct {
root string
userdataPath string
metadataPath string
expectRoot string
userdata string
metadata string
}{
{
root: "/",
userdataPath: "2009-04-04/user-data",
metadataPath: "2009-04-04/meta-data",
expectRoot: "/",
userdata: "/2009-04-04/user-data",
metadata: "/2009-04-04/meta-data",
},
{
root: "http://169.254.169.254/",
userdataPath: "2009-04-04/user-data",
metadataPath: "2009-04-04/meta-data",
expectRoot: "http://169.254.169.254/",
userdata: "http://169.254.169.254/2009-04-04/user-data",
metadata: "http://169.254.169.254/2009-04-04/meta-data",
},
} {
service := &MetadataService{
Root: tt.root,
UserdataPath: tt.userdataPath,
MetadataPath: tt.metadataPath,
}
if url := service.UserdataUrl(); url != tt.userdata {
t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.userdata, url)
}
if url := service.MetadataUrl(); url != tt.metadata {
t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.metadata, url)
}
if url := service.ConfigRoot(); url != tt.expectRoot {
t.Fatalf("bad url (%q): want %q, got %q", tt.root, tt.expectRoot, url)
}
}
}
func TestNewDatasource(t *testing.T) {
for _, tt := range []struct {
root string
expectRoot string
}{
{
root: "",
expectRoot: "/",
},
{
root: "/",
expectRoot: "/",
},
{
root: "http://169.254.169.254",
expectRoot: "http://169.254.169.254/",
},
{
root: "http://169.254.169.254/",
expectRoot: "http://169.254.169.254/",
},
} {
service := NewDatasource(tt.root, "", "", "")
if service.Root != tt.expectRoot {
t.Fatalf("bad root (%q): want %q, got %q", tt.root, tt.expectRoot, service.Root)
}
}
}
func Error(err error) string {
if err != nil {
return err.Error()
}
return ""
}

View File

@@ -0,0 +1,27 @@
package test
import (
"fmt"
"github.com/coreos/coreos-cloudinit/pkg"
)
type HttpClient struct {
Resources map[string]string
Err error
}
func (t *HttpClient) GetRetry(url string) ([]byte, error) {
if t.Err != nil {
return nil, t.Err
}
if val, ok := t.Resources[url]; ok {
return []byte(val), nil
} else {
return nil, pkg.ErrNotFound{fmt.Errorf("not found: %q", url)}
}
}
func (t *HttpClient) Get(url string) ([]byte, error) {
return t.GetRetry(url)
}

View File

@@ -1,155 +0,0 @@
package datasource
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"strings"
"github.com/coreos/coreos-cloudinit/pkg"
)
// metadataService retrieves metadata from either an OpenStack[1] (2012-08-10)
// or EC2[2] (2009-04-04) compatible endpoint. It will first attempt to
// directly retrieve a JSON blob from the OpenStack endpoint. If that fails
// with a 404, it then attempts to retrieve metadata bit-by-bit from the EC2
// endpoint, and populates that into an equivalent JSON blob. metadataService
// also checks for userdata from EC2 and, if that fails with a 404, OpenStack.
//
// [1] http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
// [2] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html#instancedata-data-categories
const (
BaseUrl = "http://169.254.169.254/"
Ec2ApiVersion = "2009-04-04"
Ec2UserdataUrl = BaseUrl + Ec2ApiVersion + "/user-data"
Ec2MetadataUrl = BaseUrl + Ec2ApiVersion + "/meta-data"
OpenstackApiVersion = "openstack/2012-08-10"
OpenstackUserdataUrl = BaseUrl + OpenstackApiVersion + "/user_data"
)
type metadataService struct{}
type getter interface {
GetRetry(string) ([]byte, error)
}
func NewMetadataService() *metadataService {
return &metadataService{}
}
func (ms *metadataService) IsAvailable() bool {
client := pkg.NewHttpClient()
_, err := client.Get(BaseUrl)
return (err == nil)
}
func (ms *metadataService) AvailabilityChanges() bool {
return true
}
func (ms *metadataService) ConfigRoot() string {
return ""
}
func (ms *metadataService) FetchMetadata() ([]byte, error) {
return fetchMetadata(pkg.NewHttpClient())
}
func (ms *metadataService) FetchUserdata() ([]byte, error) {
client := pkg.NewHttpClient()
if data, err := client.GetRetry(Ec2UserdataUrl); err == nil {
return data, err
} else if _, ok := err.(pkg.ErrTimeout); ok {
return data, err
}
if data, err := client.GetRetry(OpenstackUserdataUrl); err == nil {
return data, err
} else if _, ok := err.(pkg.ErrNotFound); ok {
return []byte{}, nil
} else {
return data, err
}
}
func (ms *metadataService) Type() string {
return "metadata-service"
}
func fetchMetadata(client getter) ([]byte, error) {
attrs := make(map[string]interface{})
if keynames, err := fetchAttributes(client, fmt.Sprintf("%s/public-keys", Ec2MetadataUrl)); err == nil {
keyIDs := make(map[string]string)
for _, keyname := range keynames {
tokens := strings.SplitN(keyname, "=", 2)
if len(tokens) != 2 {
return nil, fmt.Errorf("malformed public key: %q\n", keyname)
}
keyIDs[tokens[1]] = tokens[0]
}
keys := make(map[string]string)
for name, id := range keyIDs {
sshkey, err := fetchAttribute(client, fmt.Sprintf("%s/public-keys/%s/openssh-key", Ec2MetadataUrl, id))
if err != nil {
return nil, err
}
keys[name] = sshkey
fmt.Printf("Found SSH key for %q\n", name)
}
attrs["public_keys"] = keys
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if hostname, err := fetchAttribute(client, fmt.Sprintf("%s/hostname", Ec2MetadataUrl)); err == nil {
attrs["hostname"] = hostname
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if localAddr, err := fetchAttribute(client, fmt.Sprintf("%s/local-ipv4", Ec2MetadataUrl)); err == nil {
attrs["local-ipv4"] = localAddr
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if publicAddr, err := fetchAttribute(client, fmt.Sprintf("%s/public-ipv4", Ec2MetadataUrl)); err == nil {
attrs["public-ipv4"] = publicAddr
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
if content_path, err := fetchAttribute(client, fmt.Sprintf("%s/network_config/content_path", Ec2MetadataUrl)); err == nil {
attrs["network_config"] = map[string]string{
"content_path": content_path,
}
} else if _, ok := err.(pkg.ErrNotFound); !ok {
return nil, err
}
return json.Marshal(attrs)
}
func fetchAttributes(client getter, url string) ([]string, error) {
resp, err := client.GetRetry(url)
if err != nil {
return nil, err
}
scanner := bufio.NewScanner(bytes.NewBuffer(resp))
data := make([]string, 0)
for scanner.Scan() {
data = append(data, scanner.Text())
}
return data, scanner.Err()
}
func fetchAttribute(client getter, url string) (string, error) {
if attrs, err := fetchAttributes(client, url); err == nil && len(attrs) > 0 {
return attrs[0], nil
} else {
return "", err
}
}

View File

@@ -1,159 +0,0 @@
package datasource
import (
"bytes"
"fmt"
"reflect"
"testing"
"github.com/coreos/coreos-cloudinit/pkg"
)
type TestHttpClient struct {
metadata map[string]string
err error
}
func (t *TestHttpClient) GetRetry(url string) ([]byte, error) {
if t.err != nil {
return nil, t.err
}
if val, ok := t.metadata[url]; ok {
return []byte(val), nil
} else {
return nil, pkg.ErrNotFound{fmt.Errorf("not found: %q", url)}
}
}
func TestFetchAttributes(t *testing.T) {
for _, s := range []struct {
metadata map[string]string
err error
tests []struct {
path string
val []string
}
}{
{
metadata: map[string]string{
"/": "a\nb\nc/",
"/c/": "d\ne/",
"/c/e/": "f",
"/a": "1",
"/b": "2",
"/c/d": "3",
"/c/e/f": "4",
},
tests: []struct {
path string
val []string
}{
{"/", []string{"a", "b", "c/"}},
{"/b", []string{"2"}},
{"/c/d", []string{"3"}},
{"/c/e/", []string{"f"}},
},
},
{
err: pkg.ErrNotFound{fmt.Errorf("test error")},
tests: []struct {
path string
val []string
}{
{"", nil},
},
},
} {
client := &TestHttpClient{s.metadata, s.err}
for _, tt := range s.tests {
attrs, err := fetchAttributes(client, tt.path)
if err != s.err {
t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.metadata, s.err, err)
}
if !reflect.DeepEqual(attrs, tt.val) {
t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.metadata, tt.val, attrs)
}
}
}
}
func TestFetchAttribute(t *testing.T) {
for _, s := range []struct {
metadata map[string]string
err error
tests []struct {
path string
val string
}
}{
{
metadata: map[string]string{
"/": "a\nb\nc/",
"/c/": "d\ne/",
"/c/e/": "f",
"/a": "1",
"/b": "2",
"/c/d": "3",
"/c/e/f": "4",
},
tests: []struct {
path string
val string
}{
{"/a", "1"},
{"/b", "2"},
{"/c/d", "3"},
{"/c/e/f", "4"},
},
},
{
err: pkg.ErrNotFound{fmt.Errorf("test error")},
tests: []struct {
path string
val string
}{
{"", ""},
},
},
} {
client := &TestHttpClient{s.metadata, s.err}
for _, tt := range s.tests {
attr, err := fetchAttribute(client, tt.path)
if err != s.err {
t.Fatalf("bad error for %q (%q): want %q, got %q", tt.path, s.metadata, s.err, err)
}
if attr != tt.val {
t.Fatalf("bad fetch for %q (%q): want %q, got %q", tt.path, s.metadata, tt.val, attr)
}
}
}
}
func TestFetchMetadata(t *testing.T) {
for _, tt := range []struct {
metadata map[string]string
err error
expect []byte
}{
{
metadata: map[string]string{
"http://169.254.169.254/2009-04-04/meta-data/hostname": "host",
"http://169.254.169.254/2009-04-04/meta-data/public-keys": "0=test1\n",
"http://169.254.169.254/2009-04-04/meta-data/public-keys/0": "openssh-key",
"http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key": "key",
"http://169.254.169.254/2009-04-04/meta-data/network_config/content_path": "path",
},
expect: []byte(`{"hostname":"host","network_config":{"content_path":"path"},"public_keys":{"test1":"key"}}`),
},
{err: pkg.ErrTimeout{fmt.Errorf("test error")}},
} {
client := &TestHttpClient{tt.metadata, tt.err}
metadata, err := fetchMetadata(client)
if err != tt.err {
t.Fatalf("bad error (%q): want %q, got %q", tt.metadata, tt.err, err)
}
if !bytes.Equal(metadata, tt.expect) {
t.Fatalf("bad fetch (%q): want %q, got %q", tt.metadata, tt.expect, metadata)
}
}
}

View File

@@ -1,4 +1,4 @@
-package datasource
+package proc_cmdline
import (
"errors"
@@ -18,7 +18,7 @@ type procCmdline struct {
Location string
}
-func NewProcCmdline() *procCmdline {
+func NewDatasource() *procCmdline {
return &procCmdline{Location: ProcCmdlineLocation}
}
@@ -66,6 +66,10 @@ func (c *procCmdline) FetchUserdata() ([]byte, error) {
return cfg, nil
}
+func (c *procCmdline) FetchNetworkConfig(filename string) ([]byte, error) {
+return nil, nil
+}
func (c *procCmdline) Type() string {
return "proc-cmdline"
}

View File

@@ -1,4 +1,4 @@
-package datasource
+package proc_cmdline
import (
"fmt"
@@ -75,7 +75,7 @@ func TestProcCmdlineAndFetchConfig(t *testing.T) {
t.Errorf("Test produced error: %v", err)
}
-p := NewProcCmdline()
+p := NewDatasource()
p.Location = file.Name()
cfg, err := p.FetchUserdata()
if err != nil {

View File

@@ -1,12 +1,14 @@
-package datasource
+package url
-import "github.com/coreos/coreos-cloudinit/pkg"
+import (
+"github.com/coreos/coreos-cloudinit/pkg"
+)
type remoteFile struct {
url string
}
-func NewRemoteFile(url string) *remoteFile {
+func NewDatasource(url string) *remoteFile {
return &remoteFile{url}
}
@@ -33,6 +35,10 @@ func (f *remoteFile) FetchUserdata() ([]byte, error) {
return client.GetRetry(f.url)
}
+func (f *remoteFile) FetchNetworkConfig(filename string) ([]byte, error) {
+return nil, nil
+}
func (f *remoteFile) Type() string {
return "url"
}

View File

@@ -3,11 +3,10 @@ package initialize
import (
"errors"
"fmt"
-"io/ioutil"
"log"
"path"
-"github.com/coreos/coreos-cloudinit/third_party/launchpad.net/goyaml"
+"github.com/coreos/coreos-cloudinit/third_party/gopkg.in/yaml.v1"
"github.com/coreos/coreos-cloudinit/network"
"github.com/coreos/coreos-cloudinit/system"
@@ -42,6 +41,7 @@ type CloudConfig struct {
Users []system.User
ManageEtcHosts EtcHosts `yaml:"manage_etc_hosts"`
NetworkConfigPath string
+NetworkConfig string
}
type warner func(format string, v ...interface{})
@@ -51,12 +51,12 @@ type warner func(format string, v ...interface{})
func warnOnUnrecognizedKeys(contents string, warn warner) {
// Generate a map of all understood cloud config options
var cc map[string]interface{}
-b, _ := goyaml.Marshal(&CloudConfig{})
-goyaml.Unmarshal(b, &cc)
+b, _ := yaml.Marshal(&CloudConfig{})
+yaml.Unmarshal(b, &cc)
// Now unmarshal the entire provided contents
var c map[string]interface{}
-goyaml.Unmarshal([]byte(contents), &c)
+yaml.Unmarshal([]byte(contents), &c)
// Check that every key in the contents exists in the cloud config
for k, _ := range c {
@@ -66,52 +66,63 @@ func warnOnUnrecognizedKeys(contents string, warn warner) {
}
// Check for unrecognized coreos options, if any are set
-coreos, ok := c["coreos"]
-if ok {
-set := coreos.(map[interface{}]interface{})
+if coreos, ok := c["coreos"]; ok {
+if set, ok := coreos.(map[interface{}]interface{}); ok {
known := cc["coreos"].(map[interface{}]interface{})
for k, _ := range set {
-key := k.(string)
+if key, ok := k.(string); ok {
if _, ok := known[key]; !ok {
warn("Warning: unrecognized key %q in coreos section of provided cloud config - ignoring", key)
}
+} else {
+warn("Warning: unrecognized key %q in coreos section of provided cloud config - ignoring", k)
+}
+}
}
}
// Check for any badly-specified users, if any are set
-users, ok := c["users"]
-if ok {
+if users, ok := c["users"]; ok {
var known map[string]interface{}
-b, _ := goyaml.Marshal(&system.User{})
-goyaml.Unmarshal(b, &known)
-set := users.([]interface{})
+b, _ := yaml.Marshal(&system.User{})
+yaml.Unmarshal(b, &known)
+if set, ok := users.([]interface{}); ok {
for _, u := range set {
-user := u.(map[interface{}]interface{})
+if user, ok := u.(map[interface{}]interface{}); ok {
for k, _ := range user {
-key := k.(string)
+if key, ok := k.(string); ok {
if _, ok := known[key]; !ok {
warn("Warning: unrecognized key %q in user section of cloud config - ignoring", key)
}
+} else {
+warn("Warning: unrecognized key %q in user section of cloud config - ignoring", k)
+}
+}
+}
+}
}
}
}
// Check for any badly-specified files, if any are set
-files, ok := c["write_files"]
-if ok {
+if files, ok := c["write_files"]; ok {
var known map[string]interface{}
-b, _ := goyaml.Marshal(&system.File{})
-goyaml.Unmarshal(b, &known)
-set := files.([]interface{})
+b, _ := yaml.Marshal(&system.File{})
+yaml.Unmarshal(b, &known)
+if set, ok := files.([]interface{}); ok {
for _, f := range set {
-file := f.(map[interface{}]interface{})
+if file, ok := f.(map[interface{}]interface{}); ok {
for k, _ := range file {
-key := k.(string)
+if key, ok := k.(string); ok {
if _, ok := known[key]; !ok {
warn("Warning: unrecognized key %q in file section of cloud config - ignoring", key)
}
+} else {
+warn("Warning: unrecognized key %q in file section of cloud config - ignoring", k)
+}
+}
+}
+}
}
}
}
@@ -122,7 +133,7 @@ func warnOnUnrecognizedKeys(contents string, warn warner) {
// fields but log encountering them.
func NewCloudConfig(contents string) (*CloudConfig, error) {
var cfg CloudConfig
-err := goyaml.Unmarshal([]byte(contents), &cfg)
+err := yaml.Unmarshal([]byte(contents), &cfg)
if err != nil {
return &cfg, err
}
@@ -131,7 +142,7 @@ func NewCloudConfig(contents string) (*CloudConfig, error) {
}
func (cc CloudConfig) String() string {
-bytes, err := goyaml.Marshal(cc)
+bytes, err := yaml.Marshal(cc)
if err != nil {
return ""
}
@@ -247,15 +258,13 @@ func Apply(cfg CloudConfig, env *Environment) error {
}
if env.NetconfType() != "" {
-netconfBytes, err := ioutil.ReadFile(path.Join(env.ConfigRoot(), cfg.NetworkConfigPath))
-if err != nil {
-return err
-}
var interfaces []network.InterfaceGenerator
+var err error
switch env.NetconfType() {
case "debian":
-interfaces, err = network.ProcessDebianNetconf(string(netconfBytes))
+interfaces, err = network.ProcessDebianNetconf(cfg.NetworkConfig)
+case "digitalocean":
+interfaces, err = network.ProcessDigitalOceanNetconf(cfg.NetworkConfig)
default:
return fmt.Errorf("Unsupported network config format %q", env.NetconfType())
}
@@ -272,13 +281,27 @@ func Apply(cfg CloudConfig, env *Environment) error {
}
}
-commands := make(map[string]string, 0)
+um := system.NewUnitManager(env.Root())
+return processUnits(cfg.Coreos.Units, env.Root(), um)
+}
+// processUnits takes a set of Units and applies them to the given root using
+// the given UnitManager. This can involve things like writing unit files to
+// disk, masking/unmasking units, or invoking systemd
+// commands against units. It returns any error encountered.
+func processUnits(units []system.Unit, root string, um system.UnitManager) error {
+type action struct {
+unit string
+command string
+}
+actions := make([]action, 0, len(units))
reload := false
-for _, unit := range cfg.Coreos.Units {
-dst := unit.Destination(env.Root())
+for _, unit := range units {
+dst := unit.Destination(root)
if unit.Content != "" {
log.Printf("Writing unit %s to filesystem at path %s", unit.Name, dst)
-if err := system.PlaceUnit(&unit, dst); err != nil {
+if err := um.PlaceUnit(&unit, dst); err != nil {
return err
}
log.Printf("Placed unit %s at %s", unit.Name, dst)
@@ -287,12 +310,12 @@ func Apply(cfg CloudConfig, env *Environment) error {
if unit.Mask {
log.Printf("Masking unit file %s", unit.Name)
-if err := system.MaskUnit(&unit, env.Root()); err != nil {
+if err := um.MaskUnit(&unit); err != nil {
return err
}
} else if unit.Runtime {
log.Printf("Ensuring runtime unit file %s is unmasked", unit.Name)
-if err := system.UnmaskUnit(&unit, env.Root()); err != nil {
+if err := um.UnmaskUnit(&unit); err != nil {
return err
}
}
@@ -300,7 +323,7 @@ func Apply(cfg CloudConfig, env *Environment) error {
if unit.Enable {
if unit.Group() != "network" {
log.Printf("Enabling unit file %s", unit.Name)
-if err := system.EnableUnitFile(unit.Name, unit.Runtime); err != nil {
+if err := um.EnableUnitFile(unit.Name, unit.Runtime); err != nil {
return err
}
log.Printf("Enabled unit %s", unit.Name)
@@ -310,25 +333,25 @@ func Apply(cfg CloudConfig, env *Environment) error {
}
if unit.Group() == "network" {
-commands["systemd-networkd.service"] = "restart"
+actions = append(actions, action{"systemd-networkd.service", "restart"})
} else if unit.Command != "" {
-commands[unit.Name] = unit.Command
+actions = append(actions, action{unit.Name, unit.Command})
}
}
if reload {
-if err := system.DaemonReload(); err != nil {
+if err := um.DaemonReload(); err != nil {
return errors.New(fmt.Sprintf("failed systemd daemon-reload: %v", err))
}
}
-for unit, command := range commands {
-log.Printf("Calling unit command '%s %s'", command, unit)
-res, err := system.RunUnitCommand(command, unit)
+for _, action := range actions {
+log.Printf("Calling unit command '%s %s'", action.command, action.unit)
+res, err := um.RunUnitCommand(action.command, action.unit)
if err != nil {
return err
}
-log.Printf("Result of '%s %s': %s", command, unit, res)
+log.Printf("Result of '%s %s': %s", action.command, action.unit, res)
}
return nil
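A minimal standalone illustration, not from the changeset, of the design choice behind processUnits: replacing the map of unit commands with an ordered slice of actions keeps execution in the order the units are declared, since Go map iteration order is unspecified. The unit names below are examples only.

package main

import "fmt"

// action mirrors the small struct introduced in processUnits above.
type action struct {
	unit    string
	command string
}

func main() {
	units := []string{"etcd.service", "fleet.service", "docker.service"}
	// A slice preserves declaration order; ranging over a map would not.
	actions := make([]action, 0, len(units))
	for _, u := range units {
		actions = append(actions, action{u, "start"})
	}
	for _, a := range actions {
		fmt.Printf("would run %q on %s\n", a.command, a.unit)
	}
}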

View File

@@ -4,8 +4,38 @@ import (
"fmt"
"strings"
"testing"
+"github.com/coreos/coreos-cloudinit/system"
)
func TestCloudConfigInvalidKeys(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Fatalf("panic while instantiating CloudConfig with nil keys: %v", r)
}
}()
for _, tt := range []struct {
contents string
}{
{"coreos:"},
{"ssh_authorized_keys:"},
{"ssh_authorized_keys:\n -"},
{"ssh_authorized_keys:\n - 0:"},
{"write_files:"},
{"write_files:\n -"},
{"write_files:\n - 0:"},
{"users:"},
{"users:\n -"},
{"users:\n - 0:"},
} {
_, err := NewCloudConfig(tt.contents)
if err != nil {
t.Fatalf("error instantiating CloudConfig with invalid keys: %v", err)
}
}
}
func TestCloudConfigUnknownKeys(t *testing.T) {
contents := `
coreos:
@@ -332,3 +362,109 @@ users:
t.Errorf("Failed to parse no-log-init field") t.Errorf("Failed to parse no-log-init field")
} }
} }
type TestUnitManager struct {
placed []string
enabled []string
masked []string
unmasked []string
commands map[string]string
reload bool
}
func (tum *TestUnitManager) PlaceUnit(unit *system.Unit, dst string) error {
tum.placed = append(tum.placed, unit.Name)
return nil
}
func (tum *TestUnitManager) EnableUnitFile(unit string, runtime bool) error {
tum.enabled = append(tum.enabled, unit)
return nil
}
func (tum *TestUnitManager) RunUnitCommand(command, unit string) (string, error) {
if tum.commands == nil {
tum.commands = make(map[string]string)
}
tum.commands[unit] = command
return "", nil
}
func (tum *TestUnitManager) DaemonReload() error {
tum.reload = true
return nil
}
func (tum *TestUnitManager) MaskUnit(unit *system.Unit) error {
tum.masked = append(tum.masked, unit.Name)
return nil
}
func (tum *TestUnitManager) UnmaskUnit(unit *system.Unit) error {
tum.unmasked = append(tum.unmasked, unit.Name)
return nil
}
func TestProcessUnits(t *testing.T) {
tum := &TestUnitManager{}
units := []system.Unit{
system.Unit{
Name: "foo",
Mask: true,
},
}
if err := processUnits(units, "", tum); err != nil {
t.Fatalf("unexpected error calling processUnits: %v", err)
}
if len(tum.masked) != 1 || tum.masked[0] != "foo" {
t.Errorf("expected foo to be masked, but found %v", tum.masked)
}
tum = &TestUnitManager{}
units = []system.Unit{
system.Unit{
Name: "bar.network",
},
}
if err := processUnits(units, "", tum); err != nil {
t.Fatalf("unexpected error calling processUnits: %v", err)
}
if _, ok := tum.commands["systemd-networkd.service"]; !ok {
t.Errorf("expected systemd-networkd.service to be reloaded!")
}
tum = &TestUnitManager{}
units = []system.Unit{
system.Unit{
Name: "baz.service",
Content: "[Service]\nExecStart=/bin/true",
},
}
if err := processUnits(units, "", tum); err != nil {
t.Fatalf("unexpected error calling processUnits: %v", err)
}
if len(tum.placed) != 1 || tum.placed[0] != "baz.service" {
t.Fatalf("expected baz.service to be written, but got %v", tum.placed)
}
tum = &TestUnitManager{}
units = []system.Unit{
system.Unit{
Name: "locksmithd.service",
Runtime: true,
},
}
if err := processUnits(units, "", tum); err != nil {
t.Fatalf("unexpected error calling processUnits: %v", err)
}
if len(tum.unmasked) != 1 || tum.unmasked[0] != "locksmithd.service" {
t.Fatalf("expected locksmithd.service to be unmasked, but got %v", tum.unmasked)
}
tum = &TestUnitManager{}
units = []system.Unit{
system.Unit{
Name: "woof",
Enable: true,
},
}
if err := processUnits(units, "", tum); err != nil {
t.Fatalf("unexpected error calling processUnits: %v", err)
}
if len(tum.enabled) != 1 || tum.enabled[0] != "woof" {
t.Fatalf("expected woof to be enabled, but got %v", tum.enabled)
}
}

View File

@@ -3,6 +3,7 @@ package initialize
import (
"os"
"path"
+"regexp"
"strings"
"github.com/coreos/coreos-cloudinit/system"
@@ -28,6 +29,8 @@ func NewEnvironment(root, configRoot, workspace, netconfType, sshKeyName string,
for k, v := range map[string]string{
"$public_ipv4": os.Getenv("COREOS_PUBLIC_IPV4"),
"$private_ipv4": os.Getenv("COREOS_PRIVATE_IPV4"),
+"$public_ipv6": os.Getenv("COREOS_PUBLIC_IPV6"),
+"$private_ipv6": os.Getenv("COREOS_PRIVATE_IPV6"),
} {
if _, ok := substitutions[k]; !ok {
substitutions[k] = v
@@ -60,9 +63,18 @@ func (e *Environment) SetSSHKeyName(name string) {
e.sshKeyName = name
}
+// Apply goes through the map of substitutions and replaces all instances of
+// the keys with their respective values. It supports escaping substitutions
+// with a leading '\'.
func (e *Environment) Apply(data string) string {
for key, val := range e.substitutions {
-data = strings.Replace(data, key, val, -1)
+matchKey := strings.Replace(key, `$`, `\$`, -1)
+replKey := strings.Replace(key, `$`, `$$`, -1)
+// "key" -> "val"
+data = regexp.MustCompile(`([^\\]|^)`+matchKey).ReplaceAllString(data, `${1}`+val)
+// "\key" -> "key"
+data = regexp.MustCompile(`\\`+matchKey).ReplaceAllString(data, replKey)
}
return data
}
@@ -80,6 +92,12 @@ func (e *Environment) DefaultEnvironmentFile() *system.EnvFile {
if ip, ok := e.substitutions["$private_ipv4"]; ok && len(ip) > 0 {
ef.Vars["COREOS_PRIVATE_IPV4"] = ip
}
+if ip, ok := e.substitutions["$public_ipv6"]; ok && len(ip) > 0 {
+ef.Vars["COREOS_PUBLIC_IPV6"] = ip
+}
+if ip, ok := e.substitutions["$private_ipv6"]; ok && len(ip) > 0 {
+ef.Vars["COREOS_PRIVATE_IPV6"] = ip
+}
if len(ef.Vars) == 0 {
return nil
} else {
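A self-contained sketch of the escaping rule Apply implements above; it copies the two regular expressions into a throwaway helper so the behavior can be run in isolation. The substitution value is invented.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// apply replaces "$key" with its value and unescapes "\$key" to a literal "$key",
// using the same two regular expressions as Environment.Apply above.
func apply(data string, subs map[string]string) string {
	for key, val := range subs {
		matchKey := strings.Replace(key, `$`, `\$`, -1)
		replKey := strings.Replace(key, `$`, `$$`, -1)
		data = regexp.MustCompile(`([^\\]|^)`+matchKey).ReplaceAllString(data, `${1}`+val)
		data = regexp.MustCompile(`\\`+matchKey).ReplaceAllString(data, replKey)
	}
	return data
}

func main() {
	subs := map[string]string{"$private_ipv4": "127.0.0.1"}
	fmt.Println(apply(`addr: $private_ipv4`, subs))  // addr: 127.0.0.1
	fmt.Println(apply(`addr: \$private_ipv4`, subs)) // addr: $private_ipv4
}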

View File

@@ -12,6 +12,8 @@ import (
func TestEnvironmentApply(t *testing.T) {
os.Setenv("COREOS_PUBLIC_IPV4", "1.2.3.4")
os.Setenv("COREOS_PRIVATE_IPV4", "5.6.7.8")
+os.Setenv("COREOS_PUBLIC_IPV6", "1234::")
+os.Setenv("COREOS_PRIVATE_IPV6", "5678::")
for _, tt := range []struct {
subs map[string]string
input string
@@ -23,14 +25,16 @@ func TestEnvironmentApply(t *testing.T) {
map[string]string{
"$public_ipv4": "192.0.2.3",
"$private_ipv4": "192.0.2.203",
+"$public_ipv6": "fe00:1234::",
+"$private_ipv6": "fe00:5678::",
},
`[Service]
-ExecStart=/usr/bin/echo "$public_ipv4"
-ExecStop=/usr/bin/echo $private_ipv4
+ExecStart=/usr/bin/echo "$public_ipv4 $public_ipv6"
+ExecStop=/usr/bin/echo $private_ipv4 $private_ipv6
ExecStop=/usr/bin/echo $unknown`,
`[Service]
-ExecStart=/usr/bin/echo "192.0.2.3"
-ExecStop=/usr/bin/echo 192.0.2.203
+ExecStart=/usr/bin/echo "192.0.2.3 fe00:1234::"
+ExecStop=/usr/bin/echo 192.0.2.203 fe00:5678::
ExecStop=/usr/bin/echo $unknown`,
},
{
@@ -51,6 +55,24 @@ ExecStop=/usr/bin/echo $unknown`,
"$private_ipv4\nfoobar",
"5.6.7.8\nfoobar",
},
{
// Escaping substitutions
map[string]string{"$private_ipv4": "127.0.0.1"},
`\$private_ipv4
$private_ipv4
addr: \$private_ipv4
\\$private_ipv4`,
`$private_ipv4
127.0.0.1
addr: $private_ipv4
\$private_ipv4`,
},
{
// No substitutions with escaping
nil,
"\\$test\n$test",
"\\$test\n$test",
},
} {
env := NewEnvironment("./", "./", "./", "", "", tt.subs)
@@ -65,8 +87,10 @@ func TestEnvironmentFile(t *testing.T) {
subs := map[string]string{
"$public_ipv4": "1.2.3.4",
"$private_ipv4": "5.6.7.8",
+"$public_ipv6": "1234::",
+"$private_ipv6": "5678::",
}
-expect := "COREOS_PUBLIC_IPV4=1.2.3.4\nCOREOS_PRIVATE_IPV4=5.6.7.8\n"
+expect := "COREOS_PRIVATE_IPV4=5.6.7.8\nCOREOS_PRIVATE_IPV6=5678::\nCOREOS_PUBLIC_IPV4=1.2.3.4\nCOREOS_PUBLIC_IPV6=1234::\n"
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
if err != nil {
@@ -96,6 +120,8 @@ func TestEnvironmentFileNil(t *testing.T) {
subs := map[string]string{
"$public_ipv4": "",
"$private_ipv4": "",
+"$public_ipv6": "",
+"$private_ipv6": "",
}
env := NewEnvironment("./", "./", "./", "", "", subs)

View File

@@ -39,7 +39,7 @@ func (ee EtcdEnvironment) String() (out string) {
// Units creates a Unit file drop-in for etcd, using any configured
// options and adding a default MachineID if unset.
func (ee EtcdEnvironment) Units(root string) ([]system.Unit, error) {
-if ee == nil {
+if len(ee) < 1 {
return nil, nil
}
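A tiny aside, not from the changeset, on why the guard changed from `ee == nil` to `len(ee) < 1`: an empty-but-non-nil environment map should also produce no units, and `len` covers both cases.

package main

import "fmt"

func main() {
	var nilMap map[string]string
	emptyMap := map[string]string{}
	fmt.Println(nilMap == nil, len(nilMap) < 1)     // true true
	fmt.Println(emptyMap == nil, len(emptyMap) < 1) // false true
}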

View File

@@ -70,6 +70,8 @@ func TestEtcdEnvironmentWrittenToDisk(t *testing.T) {
}
defer os.RemoveAll(dir)
+sd := system.NewUnitManager(dir)
uu, err := ee.Units(dir)
if err != nil {
t.Fatalf("Generating etcd unit failed: %v", err)
@@ -81,7 +83,7 @@ func TestEtcdEnvironmentWrittenToDisk(t *testing.T) {
dst := u.Destination(dir)
os.Stderr.WriteString("writing to " + dir + "\n")
-if err := system.PlaceUnit(&u, dst); err != nil {
+if err := sd.PlaceUnit(&u, dst); err != nil {
t.Fatalf("Writing of EtcdEnvironment failed: %v", err)
}
@@ -111,14 +113,27 @@ Environment="ETCD_PEER_BIND_ADDR=127.0.0.1:7002"
}
}
-func TestEtcdEnvironmentWrittenToDiskDefaultToMachineID(t *testing.T) {
-ee := EtcdEnvironment{}
+func TestEtcdEnvironmentEmptyNoOp(t *testing.T) {
+ee := EtcdEnvironment{}
+uu, err := ee.Units("")
+if err != nil {
+t.Fatalf("Unexpected error: %v", err)
+}
+if len(uu) > 0 {
+t.Fatalf("Generated etcd units unexpectedly: %v", uu)
+}
+}
+func TestEtcdEnvironmentWrittenToDiskDefaultToMachineID(t *testing.T) {
+ee := EtcdEnvironment{"foo": "bar"}
dir, err := ioutil.TempDir(os.TempDir(), "coreos-cloudinit-")
if err != nil {
t.Fatalf("Unable to create tempdir: %v", err)
}
defer os.RemoveAll(dir)
+sd := system.NewUnitManager(dir)
os.Mkdir(path.Join(dir, "etc"), os.FileMode(0755))
err = ioutil.WriteFile(path.Join(dir, "etc", "machine-id"), []byte("node007"), os.FileMode(0444))
if err != nil {
@@ -136,7 +151,7 @@ func TestEtcdEnvironmentWrittenToDiskDefaultToMachineID(t *testing.T) {
dst := u.Destination(dir)
os.Stderr.WriteString("writing to " + dir + "\n")
-if err := system.PlaceUnit(&u, dst); err != nil {
+if err := sd.PlaceUnit(&u, dst); err != nil {
t.Fatalf("Writing of EtcdEnvironment failed: %v", err)
}
@@ -148,6 +163,7 @@ func TestEtcdEnvironmentWrittenToDiskDefaultToMachineID(t *testing.T) {
}
expect := `[Service]
+Environment="ETCD_FOO=bar"
Environment="ETCD_NAME=node007"
`
if string(contents) != expect {

View File

@@ -1,9 +1,12 @@
package initialize
-import "encoding/json"
+import (
+"encoding/json"
+"sort"
+)
-// ParseMetaData parses a JSON blob in the OpenStack metadata service format, and
-// converts it to a partially hydrated CloudConfig
+// ParseMetaData parses a JSON blob in the OpenStack metadata service format,
+// and converts it to a partially hydrated CloudConfig.
func ParseMetaData(contents string) (*CloudConfig, error) {
if len(contents) == 0 {
return nil, nil
@@ -22,8 +25,8 @@ func ParseMetaData(contents string) (*CloudConfig, error) {
var cfg CloudConfig
if len(metadata.SSHAuthorizedKeyMap) > 0 {
cfg.SSHAuthorizedKeys = make([]string, 0, len(metadata.SSHAuthorizedKeyMap))
-for _, key := range metadata.SSHAuthorizedKeyMap {
-cfg.SSHAuthorizedKeys = append(cfg.SSHAuthorizedKeys, key)
+for _, name := range sortedKeys(metadata.SSHAuthorizedKeyMap) {
+cfg.SSHAuthorizedKeys = append(cfg.SSHAuthorizedKeys, metadata.SSHAuthorizedKeyMap[name])
}
}
cfg.Hostname = metadata.Hostname
@@ -31,22 +34,39 @@ func ParseMetaData(contents string) (*CloudConfig, error) {
return &cfg, nil
}
-// ExtractIPsFromMetaData parses a JSON blob in the OpenStack metadata service format,
-// and returns a substitution map possibly containing private_ipv4 and public_ipv4 addresses
+// ExtractIPsFromMetaData parses a JSON blob in the OpenStack metadata service
+// format and returns a substitution map possibly containing private_ipv4,
+// public_ipv4, private_ipv6, and public_ipv6 addresses.
func ExtractIPsFromMetadata(contents []byte) (map[string]string, error) {
var ips struct {
-Public string `json:"public-ipv4"`
-Private string `json:"local-ipv4"`
+PublicIPv4 string `json:"public-ipv4"`
+PrivateIPv4 string `json:"local-ipv4"`
+PublicIPv6 string `json:"public-ipv6"`
+PrivateIPv6 string `json:"local-ipv6"`
}
if err := json.Unmarshal(contents, &ips); err != nil {
return nil, err
}
m := make(map[string]string)
-if ips.Private != "" {
-m["$private_ipv4"] = ips.Private
+if ips.PrivateIPv4 != "" {
+m["$private_ipv4"] = ips.PrivateIPv4
}
-if ips.Public != "" {
-m["$public_ipv4"] = ips.Public
+if ips.PublicIPv4 != "" {
+m["$public_ipv4"] = ips.PublicIPv4
}
+if ips.PrivateIPv6 != "" {
+m["$private_ipv6"] = ips.PrivateIPv6
+}
+if ips.PublicIPv6 != "" {
+m["$public_ipv6"] = ips.PublicIPv6
+}
return m, nil
}
+func sortedKeys(m map[string]string) (keys []string) {
+for key := range m {
+keys = append(keys, key)
+}
+sort.Strings(keys)
+return
+}
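Illustrative only: sortedKeys gives ParseMetaData a deterministic key order, which is what lets the test in the next file expect []string{"alice", "jill"} regardless of map iteration order. A standalone sketch with sample keys follows.

package main

import (
	"fmt"
	"sort"
)

// sortedKeys, as above, returns the map's keys in sorted order so callers
// iterate deterministically.
func sortedKeys(m map[string]string) (keys []string) {
	for key := range m {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	return
}

func main() {
	keyMap := map[string]string{"jack": "jill", "bob": "alice"}
	for _, name := range sortedKeys(keyMap) {
		fmt.Println(name, "->", keyMap[name]) // bob -> alice, then jack -> jill
	}
}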

View File

@@ -14,7 +14,7 @@ func TestParseMetadata(t *testing.T) {
{`{"foo": "bar"}`, &CloudConfig{}, false},
{`{"network_config": {"content_path": "asdf"}}`, &CloudConfig{NetworkConfigPath: "asdf"}, false},
{`{"hostname": "turkleton"}`, &CloudConfig{Hostname: "turkleton"}, false},
-{`{"public_keys": {"jack": "jill", "bob": "alice"}}`, &CloudConfig{SSHAuthorizedKeys: []string{"jill", "alice"}}, false},
+{`{"public_keys": {"jack": "jill", "bob": "alice"}}`, &CloudConfig{SSHAuthorizedKeys: []string{"alice", "jill"}}, false},
{`{"unknown": "thing", "hostname": "my_host", "public_keys": {"do": "re", "mi": "fa"}, "network_config": {"content_path": "/root", "blah": "zzz"}}`, &CloudConfig{SSHAuthorizedKeys: []string{"re", "fa"}, Hostname: "my_host", NetworkConfigPath: "/root"}, false},
} {
got, err := ParseMetaData(tt.in)
@@ -43,9 +43,9 @@ func TestExtractIPsFromMetadata(t *testing.T) {
out map[string]string
}{
{
-[]byte(`{"public-ipv4": "12.34.56.78", "local-ipv4": "1.2.3.4"}`),
+[]byte(`{"public-ipv4": "12.34.56.78", "local-ipv4": "1.2.3.4", "public-ipv6": "1234::", "local-ipv6": "5678::"}`),
false,
-map[string]string{"$public_ipv4": "12.34.56.78", "$private_ipv4": "1.2.3.4"},
+map[string]string{"$public_ipv4": "12.34.56.78", "$private_ipv4": "1.2.3.4", "$public_ipv6": "1234::", "$private_ipv6": "5678::"},
},
{
[]byte(`{"local-ipv4": "127.0.0.1", "something_else": "don't care"}`),


@@ -16,7 +16,7 @@ func ParseUserData(contents string) (interface{}, error) {
    // Explicitly trim the header so we can handle user-data from
    // non-unix operating systems. The rest of the file is parsed
-   // by goyaml, which correctly handles CRLF.
+   // by yaml, which correctly handles CRLF.
    header = strings.TrimSpace(header)

    if strings.HasPrefix(header, "#!") {


@@ -1,10 +1,12 @@
package network

import (
+   "log"
    "strings"
)

func ProcessDebianNetconf(config string) ([]InterfaceGenerator, error) {
+   log.Println("Processing Debian network config")
    lines := formatConfig(config)
    stanzas, err := parseStanzas(lines)
    if err != nil {
@@ -18,7 +20,9 @@ func ProcessDebianNetconf(config string) ([]InterfaceGenerator, error) {
            interfaces = append(interfaces, s)
        }
    }
+   log.Printf("Parsed %d network interfaces\n", len(interfaces))
+   log.Println("Processed Debian network config")
    return buildInterfaces(interfaces), nil
}
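
To make the input format concrete: ProcessDebianNetconf takes the contents of a Debian-style interfaces file as a single string. A rough usage sketch with made-up addresses, relying on the repository's network package as imported elsewhere in this changeset:

package main

import (
    "fmt"
    "log"

    "github.com/coreos/coreos-cloudinit/network"
)

func main() {
    config := `
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
`
    ifaces, err := network.ProcessDebianNetconf(config)
    if err != nil {
        log.Fatal(err)
    }
    for _, iface := range ifaces {
        // Filename() yields names like "00-eth0", which become the basenames
        // of the generated networkd unit files.
        fmt.Println(iface.Filename())
    }
}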

network/digitalocean.go (new file, 142 lines)

@@ -0,0 +1,142 @@
package network
import (
"encoding/json"
"fmt"
"log"
"net"
"github.com/coreos/coreos-cloudinit/datasource/metadata/digitalocean"
)
func ProcessDigitalOceanNetconf(config string) ([]InterfaceGenerator, error) {
log.Println("Processing DigitalOcean network config")
if config == "" {
return nil, nil
}
var cfg digitalocean.Metadata
if err := json.Unmarshal([]byte(config), &cfg); err != nil {
return nil, err
}
log.Println("Parsing nameservers")
nameservers, err := parseNameservers(cfg.DNS)
if err != nil {
return nil, err
}
log.Printf("Parsed %d nameservers\n", len(nameservers))
log.Println("Parsing interfaces")
generators, err := parseInterfaces(cfg.Interfaces, nameservers)
if err != nil {
return nil, err
}
log.Printf("Parsed %d network interfaces\n", len(generators))
log.Println("Processed DigitalOcean network config")
return generators, nil
}
func parseNameservers(cfg digitalocean.DNS) ([]net.IP, error) {
nameservers := make([]net.IP, 0, len(cfg.Nameservers))
for _, ns := range cfg.Nameservers {
if ip := net.ParseIP(ns); ip == nil {
return nil, fmt.Errorf("could not parse %q as nameserver IP address", ns)
} else {
nameservers = append(nameservers, ip)
}
}
return nameservers, nil
}
func parseInterfaces(cfg digitalocean.Interfaces, nameservers []net.IP) ([]InterfaceGenerator, error) {
generators := make([]InterfaceGenerator, 0, len(cfg.Public)+len(cfg.Private))
for _, iface := range cfg.Public {
if generator, err := parseInterface(iface, nameservers, true); err == nil {
generators = append(generators, &physicalInterface{*generator})
} else {
return nil, err
}
}
for _, iface := range cfg.Private {
if generator, err := parseInterface(iface, []net.IP{}, false); err == nil {
generators = append(generators, &physicalInterface{*generator})
} else {
return nil, err
}
}
return generators, nil
}
func parseInterface(iface digitalocean.Interface, nameservers []net.IP, useRoute bool) (*logicalInterface, error) {
routes := make([]route, 0)
addresses := make([]net.IPNet, 0)
if iface.IPv4 != nil {
var ip, mask, gateway net.IP
if ip = net.ParseIP(iface.IPv4.IPAddress); ip == nil {
return nil, fmt.Errorf("could not parse %q as IPv4 address", iface.IPv4.IPAddress)
}
if mask = net.ParseIP(iface.IPv4.Netmask); mask == nil {
return nil, fmt.Errorf("could not parse %q as IPv4 mask", iface.IPv4.Netmask)
}
addresses = append(addresses, net.IPNet{
IP: ip,
Mask: net.IPMask(mask),
})
if useRoute {
if gateway = net.ParseIP(iface.IPv4.Gateway); gateway == nil {
return nil, fmt.Errorf("could not parse %q as IPv4 gateway", iface.IPv4.Gateway)
}
routes = append(routes, route{
destination: net.IPNet{
IP: net.IPv4zero,
Mask: net.IPMask(net.IPv4zero),
},
gateway: gateway,
})
}
}
if iface.IPv6 != nil {
var ip, gateway net.IP
if ip = net.ParseIP(iface.IPv6.IPAddress); ip == nil {
return nil, fmt.Errorf("could not parse %q as IPv6 address", iface.IPv6.IPAddress)
}
addresses = append(addresses, net.IPNet{
IP: ip,
Mask: net.CIDRMask(iface.IPv6.Cidr, net.IPv6len*8),
})
if useRoute {
if gateway = net.ParseIP(iface.IPv6.Gateway); gateway == nil {
return nil, fmt.Errorf("could not parse %q as IPv6 gateway", iface.IPv6.Gateway)
}
routes = append(routes, route{
destination: net.IPNet{
IP: net.IPv6zero,
Mask: net.IPMask(net.IPv6zero),
},
gateway: gateway,
})
}
}
hwaddr, err := net.ParseMAC(iface.MAC)
if err != nil {
return nil, err
}
if nameservers == nil {
nameservers = []net.IP{}
}
return &logicalInterface{
hwaddr: hwaddr,
config: configMethodStatic{
addresses: addresses,
nameservers: nameservers,
routes: routes,
},
}, nil
}
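
A hypothetical invocation of ProcessDigitalOceanNetconf with a trimmed-down metadata document. The JSON field names "dns", "nameservers", "interfaces", "public", "ipv4", and "ip_address" follow the accompanying tests; the mac, netmask, and gateway keys and all values are assumptions for illustration only:

package main

import (
    "fmt"
    "log"

    "github.com/coreos/coreos-cloudinit/network"
)

func main() {
    metadata := `{
      "dns": {"nameservers": ["8.8.8.8"]},
      "interfaces": {
        "public": [{
          "mac": "01:23:45:67:89:ab",
          "ipv4": {"ip_address": "192.0.2.10", "netmask": "255.255.255.0", "gateway": "192.0.2.1"}
        }]
      }
    }`
    generators, err := network.ProcessDigitalOceanNetconf(metadata)
    if err != nil {
        log.Fatal(err)
    }
    for _, g := range generators {
        // Expected: one physical interface matched by its MAC address.
        fmt.Println(g.Type(), g.Filename())
    }
}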


@@ -0,0 +1,367 @@
package network
import (
"errors"
"net"
"reflect"
"testing"
"github.com/coreos/coreos-cloudinit/datasource/metadata/digitalocean"
)
func TestParseNameservers(t *testing.T) {
for _, tt := range []struct {
dns digitalocean.DNS
nss []net.IP
err error
}{
{
dns: digitalocean.DNS{},
nss: []net.IP{},
},
{
dns: digitalocean.DNS{[]string{"1.2.3.4"}},
nss: []net.IP{net.ParseIP("1.2.3.4")},
},
{
dns: digitalocean.DNS{[]string{"bad"}},
err: errors.New("could not parse \"bad\" as nameserver IP address"),
},
} {
nss, err := parseNameservers(tt.dns)
if !errorsEqual(tt.err, err) {
t.Fatalf("bad error (%+v): want %q, got %q", tt.dns, tt.err, err)
}
if !reflect.DeepEqual(tt.nss, nss) {
t.Fatalf("bad nameservers (%+v): want %#v, got %#v", tt.dns, tt.nss, nss)
}
}
}
func TestParseInterface(t *testing.T) {
for _, tt := range []struct {
cfg digitalocean.Interface
nss []net.IP
useRoute bool
iface *logicalInterface
err error
}{
{
cfg: digitalocean.Interface{
MAC: "bad",
},
err: errors.New("invalid MAC address: bad"),
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
},
nss: []net.IP{},
iface: &logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{},
nameservers: []net.IP{},
routes: []route{},
},
},
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
},
useRoute: true,
nss: []net.IP{net.ParseIP("1.2.3.4")},
iface: &logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{},
nameservers: []net.IP{net.ParseIP("1.2.3.4")},
routes: []route{},
},
},
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv4: &digitalocean.Address{
IPAddress: "bad",
Netmask: "255.255.0.0",
},
},
nss: []net.IP{},
err: errors.New("could not parse \"bad\" as IPv4 address"),
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv4: &digitalocean.Address{
IPAddress: "1.2.3.4",
Netmask: "bad",
},
},
nss: []net.IP{},
err: errors.New("could not parse \"bad\" as IPv4 mask"),
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv4: &digitalocean.Address{
IPAddress: "1.2.3.4",
Netmask: "255.255.0.0",
Gateway: "ignoreme",
},
},
nss: []net.IP{},
iface: &logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{net.IPNet{net.ParseIP("1.2.3.4"), net.IPMask(net.ParseIP("255.255.0.0"))}},
nameservers: []net.IP{},
routes: []route{},
},
},
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv4: &digitalocean.Address{
IPAddress: "1.2.3.4",
Netmask: "255.255.0.0",
Gateway: "bad",
},
},
useRoute: true,
nss: []net.IP{},
err: errors.New("could not parse \"bad\" as IPv4 gateway"),
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv4: &digitalocean.Address{
IPAddress: "1.2.3.4",
Netmask: "255.255.0.0",
Gateway: "5.6.7.8",
},
},
useRoute: true,
nss: []net.IP{},
iface: &logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{net.IPNet{net.ParseIP("1.2.3.4"), net.IPMask(net.ParseIP("255.255.0.0"))}},
nameservers: []net.IP{},
routes: []route{route{net.IPNet{net.IPv4zero, net.IPMask(net.IPv4zero)}, net.ParseIP("5.6.7.8")}},
},
},
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv6: &digitalocean.Address{
IPAddress: "bad",
Cidr: 16,
},
},
nss: []net.IP{},
err: errors.New("could not parse \"bad\" as IPv6 address"),
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv6: &digitalocean.Address{
IPAddress: "fe00::",
Cidr: 16,
Gateway: "ignoreme",
},
},
nss: []net.IP{},
iface: &logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{net.IPNet{net.ParseIP("fe00::"), net.IPMask(net.ParseIP("ffff::"))}},
nameservers: []net.IP{},
routes: []route{},
},
},
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv6: &digitalocean.Address{
IPAddress: "fe00::",
Cidr: 16,
Gateway: "bad",
},
},
useRoute: true,
nss: []net.IP{},
err: errors.New("could not parse \"bad\" as IPv6 gateway"),
},
{
cfg: digitalocean.Interface{
MAC: "01:23:45:67:89:AB",
IPv6: &digitalocean.Address{
IPAddress: "fe00::",
Cidr: 16,
Gateway: "fe00:1234::",
},
},
useRoute: true,
nss: []net.IP{},
iface: &logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{net.IPNet{net.ParseIP("fe00::"), net.IPMask(net.ParseIP("ffff::"))}},
nameservers: []net.IP{},
routes: []route{route{net.IPNet{net.IPv6zero, net.IPMask(net.IPv6zero)}, net.ParseIP("fe00:1234::")}},
},
},
},
} {
iface, err := parseInterface(tt.cfg, tt.nss, tt.useRoute)
if !errorsEqual(tt.err, err) {
t.Fatalf("bad error (%+v): want %q, got %q", tt.cfg, tt.err, err)
}
if !reflect.DeepEqual(tt.iface, iface) {
t.Fatalf("bad interface (%+v): want %#v, got %#v", tt.cfg, tt.iface, iface)
}
}
}
func TestParseInterfaces(t *testing.T) {
for _, tt := range []struct {
cfg digitalocean.Interfaces
nss []net.IP
ifaces []InterfaceGenerator
err error
}{
{
ifaces: []InterfaceGenerator{},
},
{
cfg: digitalocean.Interfaces{
Public: []digitalocean.Interface{{MAC: "01:23:45:67:89:AB"}},
},
ifaces: []InterfaceGenerator{
&physicalInterface{logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{},
nameservers: []net.IP{},
routes: []route{},
},
}},
},
},
{
cfg: digitalocean.Interfaces{
Private: []digitalocean.Interface{{MAC: "01:23:45:67:89:AB"}},
},
ifaces: []InterfaceGenerator{
&physicalInterface{logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{},
nameservers: []net.IP{},
routes: []route{},
},
}},
},
},
{
cfg: digitalocean.Interfaces{
Public: []digitalocean.Interface{{MAC: "01:23:45:67:89:AB"}},
},
nss: []net.IP{net.ParseIP("1.2.3.4")},
ifaces: []InterfaceGenerator{
&physicalInterface{logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{},
nameservers: []net.IP{net.ParseIP("1.2.3.4")},
routes: []route{},
},
}},
},
},
{
cfg: digitalocean.Interfaces{
Private: []digitalocean.Interface{{MAC: "01:23:45:67:89:AB"}},
},
nss: []net.IP{net.ParseIP("1.2.3.4")},
ifaces: []InterfaceGenerator{
&physicalInterface{logicalInterface{
hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}),
config: configMethodStatic{
addresses: []net.IPNet{},
nameservers: []net.IP{},
routes: []route{},
},
}},
},
},
{
cfg: digitalocean.Interfaces{
Public: []digitalocean.Interface{{MAC: "bad"}},
},
err: errors.New("invalid MAC address: bad"),
},
{
cfg: digitalocean.Interfaces{
Private: []digitalocean.Interface{{MAC: "bad"}},
},
err: errors.New("invalid MAC address: bad"),
},
} {
ifaces, err := parseInterfaces(tt.cfg, tt.nss)
if !errorsEqual(tt.err, err) {
t.Fatalf("bad error (%+v): want %q, got %q", tt.cfg, tt.err, err)
}
if !reflect.DeepEqual(tt.ifaces, ifaces) {
t.Fatalf("bad interfaces (%+v): want %#v, got %#v", tt.cfg, tt.ifaces, ifaces)
}
}
}
func TestProcessDigitalOceanNetconf(t *testing.T) {
for _, tt := range []struct {
cfg string
ifaces []InterfaceGenerator
err error
}{
{
cfg: ``,
},
{
cfg: `{"dns":{"nameservers":["bad"]}}`,
err: errors.New("could not parse \"bad\" as nameserver IP address"),
},
{
cfg: `{"interfaces":{"public":[{"ipv4":{"ip_address":"bad"}}]}}`,
err: errors.New("could not parse \"bad\" as IPv4 address"),
},
{
cfg: `{}`,
ifaces: []InterfaceGenerator{},
},
} {
ifaces, err := ProcessDigitalOceanNetconf(tt.cfg)
if !errorsEqual(tt.err, err) {
t.Fatalf("bad error (%q): want %q, got %q", tt.cfg, tt.err, err)
}
if !reflect.DeepEqual(tt.ifaces, ifaces) {
t.Fatalf("bad interfaces (%q): want %#v, got %#v", tt.cfg, tt.ifaces, ifaces)
}
}
}
func errorsEqual(a, b error) bool {
if a == nil && b == nil {
return true
}
if (a != nil && b == nil) || (a == nil && b != nil) {
return false
}
return (a.Error() == b.Error())
}


@@ -2,7 +2,10 @@ package network
import ( import (
"fmt" "fmt"
"net"
"sort"
"strconv" "strconv"
"strings"
) )
type InterfaceGenerator interface { type InterfaceGenerator interface {
@@ -11,6 +14,8 @@ type InterfaceGenerator interface {
Netdev() string Netdev() string
Link() string Link() string
Network() string Network() string
Type() string
ModprobeParams() string
} }
type networkInterface interface { type networkInterface interface {
@@ -21,13 +26,25 @@ type networkInterface interface {
type logicalInterface struct { type logicalInterface struct {
name string name string
hwaddr net.HardwareAddr
config configMethod config configMethod
children []networkInterface children []networkInterface
configDepth int configDepth int
} }
func (i *logicalInterface) Name() string {
return i.name
}
func (i *logicalInterface) Network() string { func (i *logicalInterface) Network() string {
config := fmt.Sprintf("[Match]\nName=%s\n\n[Network]\n", i.name) config := fmt.Sprintln("[Match]")
if i.name != "" {
config += fmt.Sprintf("Name=%s\n", i.name)
}
if i.hwaddr != nil {
config += fmt.Sprintf("MACAddress=%s\n", i.hwaddr)
}
config += "\n[Network]\n"
for _, child := range i.children { for _, child := range i.children {
switch iface := child.(type) { switch iface := child.(type) {
@@ -43,8 +60,8 @@ func (i *logicalInterface) Network() string {
for _, nameserver := range conf.nameservers { for _, nameserver := range conf.nameservers {
config += fmt.Sprintf("DNS=%s\n", nameserver) config += fmt.Sprintf("DNS=%s\n", nameserver)
} }
if conf.address.IP != nil { for _, addr := range conf.addresses {
config += fmt.Sprintf("\n[Address]\nAddress=%s\n", conf.address.String()) config += fmt.Sprintf("\n[Address]\nAddress=%s\n", addr.String())
} }
for _, route := range conf.routes { for _, route := range conf.routes {
config += fmt.Sprintf("\n[Route]\nDestination=%s\nGateway=%s\n", route.destination.String(), route.gateway) config += fmt.Sprintf("\n[Route]\nDestination=%s\nGateway=%s\n", route.destination.String(), route.gateway)
@@ -60,14 +77,26 @@ func (i *logicalInterface) Link() string {
return "" return ""
} }
func (i *logicalInterface) Netdev() string {
return ""
}
func (i *logicalInterface) Filename() string { func (i *logicalInterface) Filename() string {
return fmt.Sprintf("%02x-%s", i.configDepth, i.name) name := i.name
if name == "" {
name = i.hwaddr.String()
}
return fmt.Sprintf("%02x-%s", i.configDepth, name)
} }
func (i *logicalInterface) Children() []networkInterface { func (i *logicalInterface) Children() []networkInterface {
return i.children return i.children
} }
func (i *logicalInterface) ModprobeParams() string {
return ""
}
func (i *logicalInterface) setConfigDepth(depth int) { func (i *logicalInterface) setConfigDepth(depth int) {
i.configDepth = depth i.configDepth = depth
} }
@@ -76,37 +105,39 @@ type physicalInterface struct {
logicalInterface logicalInterface
} }
func (p *physicalInterface) Name() string { func (p *physicalInterface) Type() string {
return p.name return "physical"
}
func (p *physicalInterface) Netdev() string {
return ""
} }
type bondInterface struct { type bondInterface struct {
logicalInterface logicalInterface
slaves []string slaves []string
} options map[string]string
func (b *bondInterface) Name() string {
return b.name
} }
func (b *bondInterface) Netdev() string { func (b *bondInterface) Netdev() string {
return fmt.Sprintf("[NetDev]\nKind=bond\nName=%s\n", b.name) return fmt.Sprintf("[NetDev]\nKind=bond\nName=%s\n", b.name)
} }
func (b *bondInterface) Type() string {
return "bond"
}
func (b *bondInterface) ModprobeParams() string {
params := ""
for _, name := range sortedKeys(b.options) {
params += fmt.Sprintf("%s=%s ", name, b.options[name])
}
params = strings.TrimSuffix(params, " ")
return params
}
type vlanInterface struct { type vlanInterface struct {
logicalInterface logicalInterface
id int id int
rawDevice string rawDevice string
} }
func (v *vlanInterface) Name() string {
return v.name
}
func (v *vlanInterface) Netdev() string { func (v *vlanInterface) Netdev() string {
config := fmt.Sprintf("[NetDev]\nKind=vlan\nName=%s\n", v.name) config := fmt.Sprintf("[NetDev]\nKind=vlan\nName=%s\n", v.name)
switch c := v.config.(type) { switch c := v.config.(type) {
@@ -123,14 +154,18 @@ func (v *vlanInterface) Netdev() string {
return config return config
} }
func (v *vlanInterface) Type() string {
return "vlan"
}
func buildInterfaces(stanzas []*stanzaInterface) []InterfaceGenerator { func buildInterfaces(stanzas []*stanzaInterface) []InterfaceGenerator {
interfaceMap := createInterfaces(stanzas) interfaceMap := createInterfaces(stanzas)
linkAncestors(interfaceMap) linkAncestors(interfaceMap)
markConfigDepths(interfaceMap) markConfigDepths(interfaceMap)
interfaces := make([]InterfaceGenerator, 0, len(interfaceMap)) interfaces := make([]InterfaceGenerator, 0, len(interfaceMap))
for _, iface := range interfaceMap { for _, name := range sortedInterfaces(interfaceMap) {
interfaces = append(interfaces, iface) interfaces = append(interfaces, interfaceMap[name])
} }
return interfaces return interfaces
@@ -141,15 +176,22 @@ func createInterfaces(stanzas []*stanzaInterface) map[string]networkInterface {
for _, iface := range stanzas { for _, iface := range stanzas {
switch iface.kind { switch iface.kind {
case interfaceBond: case interfaceBond:
bondOptions := make(map[string]string)
for _, k := range []string{"mode", "miimon", "lacp-rate"} {
if v, ok := iface.options["bond-"+k]; ok && len(v) > 0 {
bondOptions[k] = v[0]
}
}
interfaceMap[iface.name] = &bondInterface{ interfaceMap[iface.name] = &bondInterface{
logicalInterface{ logicalInterface{
name: iface.name, name: iface.name,
config: iface.configMethod, config: iface.configMethod,
children: []networkInterface{}, children: []networkInterface{},
}, },
iface.options["slaves"], iface.options["bond-slaves"],
bondOptions,
} }
for _, slave := range iface.options["slaves"] { for _, slave := range iface.options["bond-slaves"] {
if _, ok := interfaceMap[slave]; !ok { if _, ok := interfaceMap[slave]; !ok {
interfaceMap[slave] = &physicalInterface{ interfaceMap[slave] = &physicalInterface{
logicalInterface{ logicalInterface{
@@ -203,7 +245,8 @@ func createInterfaces(stanzas []*stanzaInterface) map[string]networkInterface {
} }
func linkAncestors(interfaceMap map[string]networkInterface) { func linkAncestors(interfaceMap map[string]networkInterface) {
for _, iface := range interfaceMap { for _, name := range sortedInterfaces(interfaceMap) {
iface := interfaceMap[name]
switch i := iface.(type) { switch i := iface.(type) {
case *vlanInterface: case *vlanInterface:
if parent, ok := interfaceMap[i.rawDevice]; ok { if parent, ok := interfaceMap[i.rawDevice]; ok {
@@ -241,13 +284,33 @@ func markConfigDepths(interfaceMap map[string]networkInterface) {
} }
} }
for _, iface := range rootInterfaceMap { for _, iface := range rootInterfaceMap {
setDepth(iface, 0) setDepth(iface)
} }
} }
func setDepth(iface networkInterface, depth int) { func setDepth(iface networkInterface) int {
iface.setConfigDepth(depth) maxDepth := 0
for _, child := range iface.Children() { for _, child := range iface.Children() {
setDepth(child, depth+1) if depth := setDepth(child); depth > maxDepth {
maxDepth = depth
} }
}
iface.setConfigDepth(maxDepth)
return (maxDepth + 1)
}
func sortedKeys(m map[string]string) (keys []string) {
for key := range m {
keys = append(keys, key)
}
sort.Strings(keys)
return
}
func sortedInterfaces(m map[string]networkInterface) (keys []string) {
for key := range m {
keys = append(keys, key)
}
sort.Strings(keys)
return
} }


@@ -6,215 +6,133 @@ import (
"testing" "testing"
) )
func TestPhysicalInterfaceName(t *testing.T) { func TestInterfaceGenerators(t *testing.T) {
p := physicalInterface{logicalInterface{name: "testname"}} for _, tt := range []struct {
if p.Name() != "testname" { name string
t.FailNow() netdev string
} link string
} network string
kind string
func TestPhysicalInterfaceNetdev(t *testing.T) { iface InterfaceGenerator
p := physicalInterface{} }{
if p.Netdev() != "" { {
t.FailNow() name: "",
} network: "[Match]\nMACAddress=00:01:02:03:04:05\n\n[Network]\n",
} kind: "physical",
iface: &physicalInterface{logicalInterface{
func TestPhysicalInterfaceLink(t *testing.T) { hwaddr: net.HardwareAddr([]byte{0, 1, 2, 3, 4, 5}),
p := physicalInterface{} }},
if p.Link() != "" { },
t.FailNow() {
} name: "testname",
} network: "[Match]\nName=testname\n\n[Network]\nBond=testbond1\nVLAN=testvlan1\nVLAN=testvlan2\n",
kind: "physical",
func TestPhysicalInterfaceNetwork(t *testing.T) { iface: &physicalInterface{logicalInterface{
p := physicalInterface{logicalInterface{
name: "testname", name: "testname",
children: []networkInterface{ children: []networkInterface{
&bondInterface{ &bondInterface{logicalInterface: logicalInterface{name: "testbond1"}},
logicalInterface{ &vlanInterface{logicalInterface: logicalInterface{name: "testvlan1"}, id: 1},
name: "testbond1", &vlanInterface{logicalInterface: logicalInterface{name: "testvlan2"}, id: 1},
}, },
nil, }},
}, },
&vlanInterface{ {
logicalInterface{ name: "testname",
name: "testvlan1", netdev: "[NetDev]\nKind=bond\nName=testname\n",
}, network: "[Match]\nName=testname\n\n[Network]\nBond=testbond1\nVLAN=testvlan1\nVLAN=testvlan2\nDHCP=true\n",
1, kind: "bond",
"", iface: &bondInterface{logicalInterface: logicalInterface{
},
&vlanInterface{
logicalInterface{
name: "testvlan2",
},
1,
"",
},
},
}}
network := `[Match]
Name=testname
[Network]
Bond=testbond1
VLAN=testvlan1
VLAN=testvlan2
`
if p.Network() != network {
t.FailNow()
}
}
func TestBondInterfaceName(t *testing.T) {
b := bondInterface{logicalInterface{name: "testname"}, nil}
if b.Name() != "testname" {
t.FailNow()
}
}
func TestBondInterfaceNetdev(t *testing.T) {
b := bondInterface{logicalInterface{name: "testname"}, nil}
netdev := `[NetDev]
Kind=bond
Name=testname
`
if b.Netdev() != netdev {
t.FailNow()
}
}
func TestBondInterfaceLink(t *testing.T) {
b := bondInterface{}
if b.Link() != "" {
t.FailNow()
}
}
func TestBondInterfaceNetwork(t *testing.T) {
b := bondInterface{
logicalInterface{
name: "testname", name: "testname",
config: configMethodDHCP{}, config: configMethodDHCP{},
children: []networkInterface{ children: []networkInterface{
&bondInterface{ &bondInterface{logicalInterface: logicalInterface{name: "testbond1"}},
logicalInterface{ &vlanInterface{logicalInterface: logicalInterface{name: "testvlan1"}, id: 1},
name: "testbond1", &vlanInterface{logicalInterface: logicalInterface{name: "testvlan2"}, id: 1},
}, },
nil, }},
},
&vlanInterface{
logicalInterface{
name: "testvlan1",
},
1,
"",
},
&vlanInterface{
logicalInterface{
name: "testvlan2",
},
1,
"",
},
},
},
nil,
}
network := `[Match]
Name=testname
[Network]
Bond=testbond1
VLAN=testvlan1
VLAN=testvlan2
DHCP=true
`
if b.Network() != network {
t.FailNow()
}
}
func TestVLANInterfaceName(t *testing.T) {
v := vlanInterface{logicalInterface{name: "testname"}, 1, ""}
if v.Name() != "testname" {
t.FailNow()
}
}
func TestVLANInterfaceNetdev(t *testing.T) {
for _, tt := range []struct {
i vlanInterface
l string
}{
{
vlanInterface{logicalInterface{name: "testname"}, 1, ""},
"[NetDev]\nKind=vlan\nName=testname\n\n[VLAN]\nId=1\n",
}, },
{ {
vlanInterface{logicalInterface{name: "testname", config: configMethodStatic{hwaddress: net.HardwareAddr([]byte{0, 1, 2, 3, 4, 5})}}, 1, ""}, name: "testname",
"[NetDev]\nKind=vlan\nName=testname\nMACAddress=00:01:02:03:04:05\n\n[VLAN]\nId=1\n", netdev: "[NetDev]\nKind=vlan\nName=testname\n\n[VLAN]\nId=1\n",
network: "[Match]\nName=testname\n\n[Network]\n",
kind: "vlan",
iface: &vlanInterface{logicalInterface{name: "testname"}, 1, ""},
}, },
{ {
vlanInterface{logicalInterface{name: "testname", config: configMethodDHCP{hwaddress: net.HardwareAddr([]byte{0, 1, 2, 3, 4, 5})}}, 1, ""}, name: "testname",
"[NetDev]\nKind=vlan\nName=testname\nMACAddress=00:01:02:03:04:05\n\n[VLAN]\nId=1\n", netdev: "[NetDev]\nKind=vlan\nName=testname\nMACAddress=00:01:02:03:04:05\n\n[VLAN]\nId=1\n",
network: "[Match]\nName=testname\n\n[Network]\n",
kind: "vlan",
iface: &vlanInterface{logicalInterface{name: "testname", config: configMethodStatic{hwaddress: net.HardwareAddr([]byte{0, 1, 2, 3, 4, 5})}}, 1, ""},
}, },
} { {
if tt.i.Netdev() != tt.l { name: "testname",
t.Fatalf("bad netdev config (%q): got %q, want %q", tt.i, tt.i.Netdev(), tt.l) netdev: "[NetDev]\nKind=vlan\nName=testname\nMACAddress=00:01:02:03:04:05\n\n[VLAN]\nId=1\n",
} network: "[Match]\nName=testname\n\n[Network]\nDHCP=true\n",
} kind: "vlan",
} iface: &vlanInterface{logicalInterface{name: "testname", config: configMethodDHCP{hwaddress: net.HardwareAddr([]byte{0, 1, 2, 3, 4, 5})}}, 1, ""},
},
func TestVLANInterfaceLink(t *testing.T) { {
v := vlanInterface{} name: "testname",
if v.Link() != "" { netdev: "[NetDev]\nKind=vlan\nName=testname\n\n[VLAN]\nId=0\n",
t.FailNow() network: "[Match]\nName=testname\n\n[Network]\nDNS=8.8.8.8\n\n[Address]\nAddress=192.168.1.100/24\n\n[Route]\nDestination=0.0.0.0/0\nGateway=1.2.3.4\n",
} kind: "vlan",
} iface: &vlanInterface{logicalInterface: logicalInterface{
func TestVLANInterfaceNetwork(t *testing.T) {
v := vlanInterface{
logicalInterface{
name: "testname", name: "testname",
config: configMethodStatic{ config: configMethodStatic{
address: net.IPNet{ addresses: []net.IPNet{{IP: []byte{192, 168, 1, 100}, Mask: []byte{255, 255, 255, 0}}},
IP: []byte{192, 168, 1, 100}, nameservers: []net.IP{[]byte{8, 8, 8, 8}},
Mask: []byte{255, 255, 255, 0}, routes: []route{route{destination: net.IPNet{IP: []byte{0, 0, 0, 0}, Mask: []byte{0, 0, 0, 0}}, gateway: []byte{1, 2, 3, 4}}},
}, },
nameservers: []net.IP{ }},
[]byte{8, 8, 8, 8},
}, },
routes: []route{ } {
route{ if name := tt.iface.Name(); name != tt.name {
destination: net.IPNet{ t.Fatalf("bad name (%q): want %q, got %q", tt.iface, tt.name, name)
IP: []byte{0, 0, 0, 0},
Mask: []byte{0, 0, 0, 0},
},
gateway: []byte{1, 2, 3, 4},
},
},
},
},
0,
"",
} }
network := `[Match] if netdev := tt.iface.Netdev(); netdev != tt.netdev {
Name=testname t.Fatalf("bad netdev (%q): want %q, got %q", tt.iface, tt.netdev, netdev)
}
if link := tt.iface.Link(); link != tt.link {
t.Fatalf("bad link (%q): want %q, got %q", tt.iface, tt.link, link)
}
if network := tt.iface.Network(); network != tt.network {
t.Fatalf("bad network (%q): want %q, got %q", tt.iface, tt.network, network)
}
if kind := tt.iface.Type(); kind != tt.kind {
t.Fatalf("bad type (%q): want %q, got %q", tt.iface, tt.kind, kind)
}
}
}
[Network] func TestModprobeParams(t *testing.T) {
DNS=8.8.8.8 for _, tt := range []struct {
i InterfaceGenerator
[Address] p string
Address=192.168.1.100/24 }{
{
[Route] i: &physicalInterface{},
Destination=0.0.0.0/0 p: "",
Gateway=1.2.3.4 },
` {
if v.Network() != network { i: &vlanInterface{},
t.Log(v.Network()) p: "",
t.FailNow() },
{
i: &bondInterface{
logicalInterface{},
nil,
map[string]string{
"a": "1",
"b": "2",
},
},
p: "a=1 b=2",
},
} {
if p := tt.i.ModprobeParams(); p != tt.p {
t.Fatalf("bad params (%q): got %s, want %s", tt.i, p, tt.p)
}
} }
} }
@@ -242,7 +160,7 @@ func TestBuildInterfacesBlindBond(t *testing.T) {
auto: false, auto: false,
configMethod: configMethodManual{}, configMethod: configMethodManual{},
options: map[string][]string{ options: map[string][]string{
"slaves": []string{"eth0"}, "bond-slaves": []string{"eth0"},
}, },
}, },
} }
@@ -252,16 +170,17 @@ func TestBuildInterfacesBlindBond(t *testing.T) {
name: "bond0", name: "bond0",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{}, children: []networkInterface{},
configDepth: 1, configDepth: 0,
}, },
[]string{"eth0"}, []string{"eth0"},
map[string]string{},
} }
eth0 := &physicalInterface{ eth0 := &physicalInterface{
logicalInterface{ logicalInterface{
name: "eth0", name: "eth0",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{bond0}, children: []networkInterface{bond0},
configDepth: 0, configDepth: 1,
}, },
} }
expect := []InterfaceGenerator{bond0, eth0} expect := []InterfaceGenerator{bond0, eth0}
@@ -289,7 +208,7 @@ func TestBuildInterfacesBlindVLAN(t *testing.T) {
name: "vlan0", name: "vlan0",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{}, children: []networkInterface{},
configDepth: 1, configDepth: 0,
}, },
0, 0,
"eth0", "eth0",
@@ -299,7 +218,7 @@ func TestBuildInterfacesBlindVLAN(t *testing.T) {
name: "eth0", name: "eth0",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{vlan0}, children: []networkInterface{vlan0},
configDepth: 0, configDepth: 1,
}, },
} }
expect := []InterfaceGenerator{eth0, vlan0} expect := []InterfaceGenerator{eth0, vlan0}
@@ -323,7 +242,9 @@ func TestBuildInterfaces(t *testing.T) {
auto: false, auto: false,
configMethod: configMethodManual{}, configMethod: configMethodManual{},
options: map[string][]string{ options: map[string][]string{
"slaves": []string{"eth0"}, "bond-slaves": []string{"eth0"},
"bond-mode": []string{"4"},
"bond-miimon": []string{"100"},
}, },
}, },
&stanzaInterface{ &stanzaInterface{
@@ -332,7 +253,7 @@ func TestBuildInterfaces(t *testing.T) {
auto: false, auto: false,
configMethod: configMethodManual{}, configMethod: configMethodManual{},
options: map[string][]string{ options: map[string][]string{
"slaves": []string{"bond0"}, "bond-slaves": []string{"bond0"},
}, },
}, },
&stanzaInterface{ &stanzaInterface{
@@ -362,7 +283,7 @@ func TestBuildInterfaces(t *testing.T) {
name: "vlan1", name: "vlan1",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{}, children: []networkInterface{},
configDepth: 2, configDepth: 0,
}, },
1, 1,
"bond0", "bond0",
@@ -372,7 +293,7 @@ func TestBuildInterfaces(t *testing.T) {
name: "vlan0", name: "vlan0",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{}, children: []networkInterface{},
configDepth: 1, configDepth: 0,
}, },
0, 0,
"eth0", "eth0",
@@ -382,9 +303,10 @@ func TestBuildInterfaces(t *testing.T) {
name: "bond1", name: "bond1",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{}, children: []networkInterface{},
configDepth: 2, configDepth: 0,
}, },
[]string{"bond0"}, []string{"bond0"},
map[string]string{},
} }
bond0 := &bondInterface{ bond0 := &bondInterface{
logicalInterface{ logicalInterface{
@@ -394,16 +316,20 @@ func TestBuildInterfaces(t *testing.T) {
configDepth: 1, configDepth: 1,
}, },
[]string{"eth0"}, []string{"eth0"},
map[string]string{
"mode": "4",
"miimon": "100",
},
} }
eth0 := &physicalInterface{ eth0 := &physicalInterface{
logicalInterface{ logicalInterface{
name: "eth0", name: "eth0",
config: configMethodManual{}, config: configMethodManual{},
children: []networkInterface{bond0, vlan0}, children: []networkInterface{bond0, vlan0},
configDepth: 0, configDepth: 2,
}, },
} }
expect := []InterfaceGenerator{eth0, bond0, bond1, vlan0, vlan1} expect := []InterfaceGenerator{bond0, bond1, eth0, vlan0, vlan1}
if !reflect.DeepEqual(interfaces, expect) { if !reflect.DeepEqual(interfaces, expect) {
t.FailNow() t.FailNow()
} }
@@ -418,6 +344,8 @@ func TestFilename(t *testing.T) {
{logicalInterface{name: "iface", configDepth: 9}, "09-iface"}, {logicalInterface{name: "iface", configDepth: 9}, "09-iface"},
{logicalInterface{name: "iface", configDepth: 10}, "0a-iface"}, {logicalInterface{name: "iface", configDepth: 10}, "0a-iface"},
{logicalInterface{name: "iface", configDepth: 53}, "35-iface"}, {logicalInterface{name: "iface", configDepth: 53}, "35-iface"},
{logicalInterface{hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}), configDepth: 1}, "01-01:23:45:67:89:ab"},
{logicalInterface{name: "iface", hwaddr: net.HardwareAddr([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab}), configDepth: 1}, "01-iface"},
} { } {
if tt.i.Filename() != tt.f { if tt.i.Filename() != tt.f {
t.Fatalf("bad filename (%q): got %q, want %q", tt.i, tt.i.Filename(), tt.f) t.Fatalf("bad filename (%q): got %q, want %q", tt.i, tt.i.Filename(), tt.f)


@@ -37,7 +37,7 @@ type route struct {
type configMethod interface{}

type configMethodStatic struct {
-   address     net.IPNet
+   addresses   []net.IPNet
    nameservers []net.IP
    routes      []route
    hwaddress   net.HardwareAddr
@@ -193,20 +193,21 @@ func parseInterfaceStanza(attributes []string, options []string) (*stanzaInterfa
    switch confMethod {
    case "static":
        config := configMethodStatic{
+           addresses:   make([]net.IPNet, 1),
            routes:      make([]route, 0),
            nameservers: make([]net.IP, 0),
        }
        if addresses, ok := optionMap["address"]; ok {
            if len(addresses) == 1 {
-               config.address.IP = net.ParseIP(addresses[0])
+               config.addresses[0].IP = net.ParseIP(addresses[0])
            }
        }
        if netmasks, ok := optionMap["netmask"]; ok {
            if len(netmasks) == 1 {
-               config.address.Mask = net.IPMask(net.ParseIP(netmasks[0]).To4())
+               config.addresses[0].Mask = net.IPMask(net.ParseIP(netmasks[0]).To4())
            }
        }
-       if config.address.IP == nil || config.address.Mask == nil {
+       if config.addresses[0].IP == nil || config.addresses[0].Mask == nil {
            return nil, fmt.Errorf("malformed static network config for %q", iface)
        }
        if gateways, ok := optionMap["gateway"]; ok {
@@ -293,7 +294,6 @@ func parseHwaddress(options map[string][]string, iface string) (net.HardwareAddr
}

func parseBondStanza(iface string, conf configMethod, attributes []string, options map[string][]string) (*stanzaInterface, error) {
-   options["slaves"] = options["bond-slaves"]
    return &stanzaInterface{name: iface, kind: interfaceBond, configMethod: conf, options: options}, nil
}


@@ -129,7 +129,7 @@ func TestParseBondStanzaNoSlaves(t *testing.T) {
    if err != nil {
        t.FailNow()
    }
-   if bond.options["slaves"] != nil {
+   if bond.options["bond-slaves"] != nil {
        t.FailNow()
    }
}
@@ -152,9 +152,6 @@ func TestParseBondStanza(t *testing.T) {
    if bond.configMethod != conf {
        t.FailNow()
    }
-   if !reflect.DeepEqual(bond.options["slaves"], options["bond-slaves"]) {
-       t.FailNow()
-   }
}

func TestParsePhysicalStanza(t *testing.T) {
@@ -197,9 +194,11 @@ func TestParseVLANStanzas(t *testing.T) {
func TestParseInterfaceStanzaStaticAddress(t *testing.T) {
    options := []string{"address 192.168.1.100", "netmask 255.255.255.0"}
-   expect := net.IPNet{
+   expect := []net.IPNet{
+       {
            IP:   net.IPv4(192, 168, 1, 100),
            Mask: net.IPv4Mask(255, 255, 255, 0),
+       },
    }

    iface, err := parseInterfaceStanza([]string{"eth", "inet", "static"}, options)
@@ -210,7 +209,7 @@ func TestParseInterfaceStanzaStaticAddress(t *testing.T) {
    if !ok {
        t.FailNow()
    }
-   if !reflect.DeepEqual(static.address, expect) {
+   if !reflect.DeepEqual(static.addresses, expect) {
        t.FailNow()
    }
}


@@ -57,6 +57,11 @@ type HttpClient struct {
    client *http.Client
}

+type Getter interface {
+   Get(string) ([]byte, error)
+   GetRetry(string) ([]byte, error)
+}
+
func NewHttpClient() *HttpClient {
    hc := &HttpClient{
        MaxBackoff: time.Second * 5,
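
The new Getter interface decouples metadata fetching from the concrete HttpClient, which makes it easy to swap in a canned implementation for tests. A hypothetical fake (not part of this change), written as if it sat in a test file of the same package with fmt imported:

// fakeGetter satisfies Getter by serving canned responses keyed by URL.
type fakeGetter struct {
    responses map[string][]byte
}

func (g *fakeGetter) Get(url string) ([]byte, error) {
    if body, ok := g.responses[url]; ok {
        return body, nil
    }
    return nil, fmt.Errorf("no canned response for %q", url)
}

// GetRetry simply delegates to Get here; the real client retries with backoff.
func (g *fakeGetter) GetRetry(url string) ([]byte, error) {
    return g.Get(url)
}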


@@ -7,6 +7,7 @@ import (
    "os"
    "path"
    "regexp"
+   "sort"
)

type EnvFile struct {
@@ -24,7 +25,7 @@ var lineLexer = regexp.MustCompile(`(?m)^((?:([a-zA-Z0-9_]+)=)?.*?)\r?\n`)
// mergeEnvContents: Update the existing file contents with new values,
// preserving variable ordering and all content this code doesn't understand.
-// All new values are appended to the bottom of the old.
+// All new values are appended to the bottom of the old, sorted by key.
func mergeEnvContents(old []byte, pending map[string]string) []byte {
    var buf bytes.Buffer
    var match [][]byte
@@ -44,7 +45,8 @@ func mergeEnvContents(old []byte, pending map[string]string) []byte {
        }
    }

-   for key, value := range pending {
+   for _, key := range keys(pending) {
+       value := pending[key]
        fmt.Fprintf(&buf, "%s=%s\n", key, value)
    }

@@ -87,3 +89,12 @@ func WriteEnvFile(ef *EnvFile, root string) error {
    _, err = WriteFile(ef.File, root)
    return err
}
+
+// keys returns the keys of a map in sorted order
+func keys(m map[string]string) (s []string) {
+   for k, _ := range m {
+       s = append(s, k)
+   }
+   sort.Strings(s)
+   return
+}
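
To make the new ordering guarantee concrete, a small sketch of calling mergeEnvContents; it is unexported, so this would have to live inside the same package, and the file contents are invented:

func exampleMergeEnvContents() {
    old := []byte("FOO=bar\n# keep this comment\n")
    pending := map[string]string{"B_NEW": "2", "A_NEW": "1"}
    // Existing lines, including content the parser does not understand, are
    // preserved in place; keys not already present in old are appended at the
    // bottom in sorted order, i.e. A_NEW=1 before B_NEW=2.
    fmt.Printf("%s", mergeEnvContents(old, pending))
}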


@@ -44,7 +44,7 @@ func TestWriteEnvFileUpdate(t *testing.T) {
oldStat, err := os.Stat(fullPath) oldStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
ef := EnvFile{ ef := EnvFile{
@@ -70,11 +70,11 @@ func TestWriteEnvFileUpdate(t *testing.T) {
newStat, err := os.Stat(fullPath) newStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino { if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino {
t.Fatal("File was not replaced: %s", fullPath) t.Fatalf("File was not replaced: %s", fullPath)
} }
} }
@@ -91,7 +91,7 @@ func TestWriteEnvFileUpdateNoNewline(t *testing.T) {
oldStat, err := os.Stat(fullPath) oldStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
ef := EnvFile{ ef := EnvFile{
@@ -117,11 +117,11 @@ func TestWriteEnvFileUpdateNoNewline(t *testing.T) {
newStat, err := os.Stat(fullPath) newStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino { if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino {
t.Fatal("File was not replaced: %s", fullPath) t.Fatalf("File was not replaced: %s", fullPath)
} }
} }
@@ -170,7 +170,7 @@ func TestWriteEnvFileNoop(t *testing.T) {
oldStat, err := os.Stat(fullPath) oldStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
ef := EnvFile{ ef := EnvFile{
@@ -196,11 +196,11 @@ func TestWriteEnvFileNoop(t *testing.T) {
newStat, err := os.Stat(fullPath) newStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
if oldStat.Sys().(*syscall.Stat_t).Ino != newStat.Sys().(*syscall.Stat_t).Ino { if oldStat.Sys().(*syscall.Stat_t).Ino != newStat.Sys().(*syscall.Stat_t).Ino {
t.Fatal("File was replaced: %s", fullPath) t.Fatalf("File was replaced: %s", fullPath)
} }
} }
@@ -217,7 +217,7 @@ func TestWriteEnvFileUpdateDos(t *testing.T) {
oldStat, err := os.Stat(fullPath) oldStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
ef := EnvFile{ ef := EnvFile{
@@ -243,11 +243,11 @@ func TestWriteEnvFileUpdateDos(t *testing.T) {
newStat, err := os.Stat(fullPath) newStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino { if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino {
t.Fatal("File was not replaced: %s", fullPath) t.Fatalf("File was not replaced: %s", fullPath)
} }
} }
@@ -266,7 +266,7 @@ func TestWriteEnvFileDos2Unix(t *testing.T) {
oldStat, err := os.Stat(fullPath) oldStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
ef := EnvFile{ ef := EnvFile{
@@ -292,11 +292,11 @@ func TestWriteEnvFileDos2Unix(t *testing.T) {
newStat, err := os.Stat(fullPath) newStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino { if oldStat.Sys().(*syscall.Stat_t).Ino == newStat.Sys().(*syscall.Stat_t).Ino {
t.Fatal("File was not replaced: %s", fullPath) t.Fatalf("File was not replaced: %s", fullPath)
} }
} }
@@ -314,7 +314,7 @@ func TestWriteEnvFileEmpty(t *testing.T) {
oldStat, err := os.Stat(fullPath) oldStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
ef := EnvFile{ ef := EnvFile{
@@ -340,11 +340,11 @@ func TestWriteEnvFileEmpty(t *testing.T) {
newStat, err := os.Stat(fullPath) newStat, err := os.Stat(fullPath)
if err != nil { if err != nil {
t.Fatal("Unable to stat file: %v", err) t.Fatalf("Unable to stat file: %v", err)
} }
if oldStat.Sys().(*syscall.Stat_t).Ino != newStat.Sys().(*syscall.Stat_t).Ino { if oldStat.Sys().(*syscall.Stat_t).Ino != newStat.Sys().(*syscall.Stat_t).Ino {
t.Fatal("File was replaced: %s", fullPath) t.Fatalf("File was replaced: %s", fullPath)
} }
} }


@@ -2,10 +2,11 @@ package system
import ( import (
"fmt" "fmt"
"io/ioutil" "log"
"net" "net"
"os/exec" "os/exec"
"path" "strings"
"time"
"github.com/coreos/coreos-cloudinit/network" "github.com/coreos/coreos-cloudinit/network"
"github.com/coreos/coreos-cloudinit/third_party/github.com/dotcloud/docker/pkg/netlink" "github.com/coreos/coreos-cloudinit/third_party/github.com/dotcloud/docker/pkg/netlink"
@@ -17,6 +18,13 @@ const (
func RestartNetwork(interfaces []network.InterfaceGenerator) (err error) { func RestartNetwork(interfaces []network.InterfaceGenerator) (err error) {
defer func() { defer func() {
if e := restartNetworkd(); e != nil {
err = e
return
}
// TODO(crawford): Get rid of this once networkd fixes the race
// https://bugs.freedesktop.org/show_bug.cgi?id=76077
time.Sleep(5 * time.Second)
if e := restartNetworkd(); e != nil { if e := restartNetworkd(); e != nil {
err = e err = e
} }
@@ -26,19 +34,18 @@ func RestartNetwork(interfaces []network.InterfaceGenerator) (err error) {
return return
} }
if err = probe8012q(); err != nil { if err = maybeProbe8012q(interfaces); err != nil {
return return
} }
return return maybeProbeBonding(interfaces)
} }
func downNetworkInterfaces(interfaces []network.InterfaceGenerator) error { func downNetworkInterfaces(interfaces []network.InterfaceGenerator) error {
sysInterfaceMap := make(map[string]*net.Interface) sysInterfaceMap := make(map[string]*net.Interface)
if systemInterfaces, err := net.Interfaces(); err == nil { if systemInterfaces, err := net.Interfaces(); err == nil {
for _, iface := range systemInterfaces { for _, iface := range systemInterfaces {
// Need a copy of the interface so we can take the address iface := iface
temp := iface sysInterfaceMap[iface.Name] = &iface
sysInterfaceMap[temp.Name] = &temp
} }
} else { } else {
return err return err
@@ -46,6 +53,7 @@ func downNetworkInterfaces(interfaces []network.InterfaceGenerator) error {
for _, iface := range interfaces { for _, iface := range interfaces {
if systemInterface, ok := sysInterfaceMap[iface.Name()]; ok { if systemInterface, ok := sysInterfaceMap[iface.Name()]; ok {
log.Printf("Taking down interface %q\n", systemInterface.Name)
if err := netlink.NetworkLinkDown(systemInterface); err != nil { if err := netlink.NetworkLinkDown(systemInterface); err != nil {
fmt.Printf("Error while downing interface %q (%s). Continuing...\n", systemInterface.Name, err) fmt.Printf("Error while downing interface %q (%s). Continuing...\n", systemInterface.Name, err)
} }
@@ -55,26 +63,44 @@ func downNetworkInterfaces(interfaces []network.InterfaceGenerator) error {
return nil return nil
} }
func probe8012q() error { func maybeProbe8012q(interfaces []network.InterfaceGenerator) error {
for _, iface := range interfaces {
if iface.Type() == "vlan" {
log.Printf("Probing LKM %q (%q)\n", "8021q", "8021q")
return exec.Command("modprobe", "8021q").Run() return exec.Command("modprobe", "8021q").Run()
}
}
return nil
}
func maybeProbeBonding(interfaces []network.InterfaceGenerator) error {
for _, iface := range interfaces {
if iface.Type() == "bond" {
args := append([]string{"bonding"}, strings.Split(iface.ModprobeParams(), " ")...)
log.Printf("Probing LKM %q (%q)\n", "bonding", args)
return exec.Command("modprobe", args...).Run()
}
}
return nil
} }
func restartNetworkd() error { func restartNetworkd() error {
_, err := RunUnitCommand("restart", "systemd-networkd.service") log.Printf("Restarting networkd.service\n")
_, err := NewUnitManager("").RunUnitCommand("restart", "systemd-networkd.service")
return err return err
} }
func WriteNetworkdConfigs(interfaces []network.InterfaceGenerator) error { func WriteNetworkdConfigs(interfaces []network.InterfaceGenerator) error {
for _, iface := range interfaces { for _, iface := range interfaces {
filename := path.Join(runtimeNetworkPath, fmt.Sprintf("%s.netdev", iface.Filename())) filename := fmt.Sprintf("%s.netdev", iface.Filename())
if err := writeConfig(filename, iface.Netdev()); err != nil { if err := writeConfig(filename, iface.Netdev()); err != nil {
return err return err
} }
filename = path.Join(runtimeNetworkPath, fmt.Sprintf("%s.link", iface.Filename())) filename = fmt.Sprintf("%s.link", iface.Filename())
if err := writeConfig(filename, iface.Link()); err != nil { if err := writeConfig(filename, iface.Link()); err != nil {
return err return err
} }
filename = path.Join(runtimeNetworkPath, fmt.Sprintf("%s.network", iface.Filename())) filename = fmt.Sprintf("%s.network", iface.Filename())
if err := writeConfig(filename, iface.Network()); err != nil { if err := writeConfig(filename, iface.Network()); err != nil {
return err return err
} }
@@ -86,6 +112,7 @@ func writeConfig(filename string, config string) error {
if config == "" { if config == "" {
return nil return nil
} }
log.Printf("Writing networkd unit %q\n", filename)
return ioutil.WriteFile(filename, []byte(config), 0444) _, err := WriteFile(&File{Content: config, Path: filename}, runtimeNetworkPath)
return err
} }


@@ -13,63 +13,21 @@ import (
"github.com/coreos/coreos-cloudinit/third_party/github.com/coreos/go-systemd/dbus" "github.com/coreos/coreos-cloudinit/third_party/github.com/coreos/go-systemd/dbus"
) )
func NewUnitManager(root string) UnitManager {
return &systemd{root}
}
type systemd struct {
root string
}
// fakeMachineID is placed on non-usr CoreOS images and should // fakeMachineID is placed on non-usr CoreOS images and should
// never be used as a true MachineID // never be used as a true MachineID
const fakeMachineID = "42000000000000000000000000000042" const fakeMachineID = "42000000000000000000000000000042"
// Name for drop-in service configuration files created by cloudconfig
const cloudConfigDropIn = "20-cloudinit.conf"
type Unit struct {
Name string
Mask bool
Enable bool
Runtime bool
Content string
Command string
// For drop-in units, a cloudinit.conf is generated.
// This is currently unbound in YAML (and hence unsettable in cloud-config files)
// until the correct behaviour for multiple drop-in units is determined.
DropIn bool `yaml:"-"`
}
func (u *Unit) Type() string {
ext := filepath.Ext(u.Name)
return strings.TrimLeft(ext, ".")
}
func (u *Unit) Group() (group string) {
t := u.Type()
if t == "network" || t == "netdev" || t == "link" {
group = "network"
} else {
group = "system"
}
return
}
type Script []byte
// Destination builds the appropriate absolute file path for
// the Unit. The root argument indicates the effective base
// directory of the system (similar to a chroot).
func (u *Unit) Destination(root string) string {
dir := "etc"
if u.Runtime {
dir = "run"
}
if u.DropIn {
return path.Join(root, dir, "systemd", u.Group(), fmt.Sprintf("%s.d", u.Name), cloudConfigDropIn)
} else {
return path.Join(root, dir, "systemd", u.Group(), u.Name)
}
}
// PlaceUnit writes a unit file at the provided destination, creating // PlaceUnit writes a unit file at the provided destination, creating
// parent directories as necessary. // parent directories as necessary.
func PlaceUnit(u *Unit, dst string) error { func (s *systemd) PlaceUnit(u *Unit, dst string) error {
dir := filepath.Dir(dst) dir := filepath.Dir(dst)
if _, err := os.Stat(dir); os.IsNotExist(err) { if _, err := os.Stat(dir); os.IsNotExist(err) {
if err := os.MkdirAll(dir, os.FileMode(0755)); err != nil { if err := os.MkdirAll(dir, os.FileMode(0755)); err != nil {
@@ -91,7 +49,7 @@ func PlaceUnit(u *Unit, dst string) error {
return nil return nil
} }
func EnableUnitFile(unit string, runtime bool) error { func (s *systemd) EnableUnitFile(unit string, runtime bool) error {
conn, err := dbus.New() conn, err := dbus.New()
if err != nil { if err != nil {
return err return err
@@ -102,7 +60,7 @@ func EnableUnitFile(unit string, runtime bool) error {
return err return err
} }
func RunUnitCommand(command, unit string) (string, error) { func (s *systemd) RunUnitCommand(command, unit string) (string, error) {
conn, err := dbus.New() conn, err := dbus.New()
if err != nil { if err != nil {
return "", err return "", err
@@ -131,7 +89,7 @@ func RunUnitCommand(command, unit string) (string, error) {
return fn(unit, "replace") return fn(unit, "replace")
} }
func DaemonReload() error { func (s *systemd) DaemonReload() error {
conn, err := dbus.New() conn, err := dbus.New()
if err != nil { if err != nil {
return err return err
@@ -140,6 +98,57 @@ func DaemonReload() error {
return conn.Reload() return conn.Reload()
} }
// MaskUnit masks the given Unit by symlinking its unit file to
// /dev/null, analogous to `systemctl mask`.
// N.B.: Unlike `systemctl mask`, this function will *remove any existing unit
// file at the location*, to ensure that the mask will succeed.
func (s *systemd) MaskUnit(unit *Unit) error {
masked := unit.Destination(s.root)
if _, err := os.Stat(masked); os.IsNotExist(err) {
if err := os.MkdirAll(path.Dir(masked), os.FileMode(0755)); err != nil {
return err
}
} else if err := os.Remove(masked); err != nil {
return err
}
return os.Symlink("/dev/null", masked)
}
// UnmaskUnit is analogous to systemd's unit_file_unmask. If the file
// associated with the given Unit is empty or appears to be a symlink to
// /dev/null, it is removed.
func (s *systemd) UnmaskUnit(unit *Unit) error {
masked := unit.Destination(s.root)
ne, err := nullOrEmpty(masked)
if os.IsNotExist(err) {
return nil
} else if err != nil {
return err
}
if !ne {
log.Printf("%s is not null or empty, refusing to unmask", masked)
return nil
}
return os.Remove(masked)
}
// nullOrEmpty checks whether a given path appears to be an empty regular file
// or a symlink to /dev/null
func nullOrEmpty(path string) (bool, error) {
fi, err := os.Stat(path)
if err != nil {
return false, err
}
m := fi.Mode()
if m.IsRegular() && fi.Size() <= 0 {
return true, nil
}
if m&os.ModeCharDevice > 0 {
return true, nil
}
return false, nil
}
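
With these methods now hanging off the unexported systemd type, callers go through NewUnitManager instead of the old package-level helpers. A rough usage sketch; the root path and unit name are invented and the import path is assumed from the repository layout:

package main

import (
    "log"

    "github.com/coreos/coreos-cloudinit/system"
)

func main() {
    um := system.NewUnitManager("/") // operate on the real filesystem root
    unit := &system.Unit{Name: "example.service", Runtime: true}
    // MaskUnit points the unit's destination at /dev/null, removing any
    // existing file first; UnmaskUnit reverses that only if the file is
    // empty or already a symlink to /dev/null.
    if err := um.MaskUnit(unit); err != nil {
        log.Fatal(err)
    }
    if err := um.UnmaskUnit(unit); err != nil {
        log.Fatal(err)
    }
}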
func ExecuteScript(scriptPath string) (string, error) { func ExecuteScript(scriptPath string) (string, error) {
props := []dbus.Property{ props := []dbus.Property{
dbus.PropDescription("Unit generated and executed by coreos-cloudinit on behalf of user"), dbus.PropDescription("Unit generated and executed by coreos-cloudinit on behalf of user"),
@@ -178,54 +187,3 @@ func MachineID(root string) string {
return id return id
} }
// MaskUnit masks the given Unit by symlinking its unit file to
// /dev/null, analogous to `systemctl mask`.
// N.B.: Unlike `systemctl mask`, this function will *remove any existing unit
// file at the location*, to ensure that the mask will succeed.
func MaskUnit(unit *Unit, root string) error {
masked := unit.Destination(root)
if _, err := os.Stat(masked); os.IsNotExist(err) {
if err := os.MkdirAll(path.Dir(masked), os.FileMode(0755)); err != nil {
return err
}
} else if err := os.Remove(masked); err != nil {
return err
}
return os.Symlink("/dev/null", masked)
}
// UnmaskUnit is analogous to systemd's unit_file_unmask. If the file
// associated with the given Unit is empty or appears to be a symlink to
// /dev/null, it is removed.
func UnmaskUnit(unit *Unit, root string) error {
masked := unit.Destination(root)
ne, err := nullOrEmpty(masked)
if os.IsNotExist(err) {
return nil
} else if err != nil {
return err
}
if !ne {
log.Printf("%s is not null or empty, refusing to unmask", masked)
return nil
}
return os.Remove(masked)
}
// nullOrEmpty checks whether a given path appears to be an empty regular file
// or a symlink to /dev/null
func nullOrEmpty(path string) (bool, error) {
fi, err := os.Stat(path)
if err != nil {
return false, err
}
m := fi.Mode()
if m.IsRegular() && fi.Size() <= 0 {
return true, nil
}
if m&os.ModeCharDevice > 0 {
return true, nil
}
return false, nil
}


@@ -25,13 +25,15 @@ Address=10.209.171.177/19
} }
defer os.RemoveAll(dir) defer os.RemoveAll(dir)
sd := &systemd{dir}
dst := u.Destination(dir) dst := u.Destination(dir)
expectDst := path.Join(dir, "run", "systemd", "network", "50-eth0.network") expectDst := path.Join(dir, "run", "systemd", "network", "50-eth0.network")
if dst != expectDst { if dst != expectDst {
t.Fatalf("unit.Destination returned %s, expected %s", dst, expectDst) t.Fatalf("unit.Destination returned %s, expected %s", dst, expectDst)
} }
if err := PlaceUnit(&u, dst); err != nil { if err := sd.PlaceUnit(&u, dst); err != nil {
t.Fatalf("PlaceUnit failed: %v", err) t.Fatalf("PlaceUnit failed: %v", err)
} }
@@ -100,13 +102,15 @@ Where=/media/state
} }
defer os.RemoveAll(dir) defer os.RemoveAll(dir)
sd := &systemd{dir}
dst := u.Destination(dir) dst := u.Destination(dir)
expectDst := path.Join(dir, "etc", "systemd", "system", "media-state.mount") expectDst := path.Join(dir, "etc", "systemd", "system", "media-state.mount")
if dst != expectDst { if dst != expectDst {
t.Fatalf("unit.Destination returned %s, expected %s", dst, expectDst) t.Fatalf("unit.Destination returned %s, expected %s", dst, expectDst)
} }
if err := PlaceUnit(&u, dst); err != nil { if err := sd.PlaceUnit(&u, dst); err != nil {
t.Fatalf("PlaceUnit failed: %v", err) t.Fatalf("PlaceUnit failed: %v", err)
} }
@@ -155,9 +159,11 @@ func TestMaskUnit(t *testing.T) {
}
defer os.RemoveAll(dir)
sd := &systemd{dir}
// Ensure mask works with units that do not currently exist
uf := &Unit{Name: "foo.service"}
-if err := MaskUnit(uf, dir); err != nil {
+if err := sd.MaskUnit(uf); err != nil {
t.Fatalf("Unable to mask new unit: %v", err)
}
fooPath := path.Join(dir, "etc", "systemd", "system", "foo.service")
@@ -175,7 +181,7 @@ func TestMaskUnit(t *testing.T) {
if _, err := os.Create(barPath); err != nil {
t.Fatalf("Error creating new unit file: %v", err)
}
-if err := MaskUnit(ub, dir); err != nil {
+if err := sd.MaskUnit(ub); err != nil {
t.Fatalf("Unable to mask existing unit: %v", err)
}
barTgt, err := os.Readlink(barPath)
@@ -194,8 +200,10 @@ func TestUnmaskUnit(t *testing.T) {
}
defer os.RemoveAll(dir)
sd := &systemd{dir}
nilUnit := &Unit{Name: "null.service"}
-if err := UnmaskUnit(nilUnit, dir); err != nil {
+if err := sd.UnmaskUnit(nilUnit); err != nil {
t.Errorf("unexpected error from unmasking nonexistent unit: %v", err)
}
@@ -211,7 +219,7 @@ func TestUnmaskUnit(t *testing.T) {
if err := ioutil.WriteFile(dst, []byte(uf.Content), 700); err != nil {
t.Fatalf("Unable to write unit file: %v", err)
}
-if err := UnmaskUnit(uf, dir); err != nil {
+if err := sd.UnmaskUnit(uf); err != nil {
t.Errorf("unmask of non-empty unit returned unexpected error: %v", err)
}
got, _ := ioutil.ReadFile(dst)
@@ -224,7 +232,7 @@ func TestUnmaskUnit(t *testing.T) {
if err := os.Symlink("/dev/null", dst); err != nil {
t.Fatalf("Unable to create masked unit: %v", err)
}
-if err := UnmaskUnit(ub, dir); err != nil {
+if err := sd.UnmaskUnit(ub); err != nil {
t.Errorf("unmask of unit returned unexpected error: %v", err)
}
if _, err := os.Stat(dst); !os.IsNotExist(err) {

67
system/unit.go Normal file

@@ -0,0 +1,67 @@
package system
import (
"fmt"
"path"
"path/filepath"
"strings"
)
// Name for drop-in service configuration files created by cloudconfig
const cloudConfigDropIn = "20-cloudinit.conf"
type UnitManager interface {
PlaceUnit(unit *Unit, dst string) error
EnableUnitFile(unit string, runtime bool) error
RunUnitCommand(command, unit string) (string, error)
DaemonReload() error
MaskUnit(unit *Unit) error
UnmaskUnit(unit *Unit) error
}
type Unit struct {
Name string
Mask bool
Enable bool
Runtime bool
Content string
Command string
// For drop-in units, a cloudinit.conf is generated.
// This is currently unbound in YAML (and hence unsettable in cloud-config files)
// until the correct behaviour for multiple drop-in units is determined.
DropIn bool `yaml:"-"`
}
func (u *Unit) Type() string {
ext := filepath.Ext(u.Name)
return strings.TrimLeft(ext, ".")
}
func (u *Unit) Group() (group string) {
t := u.Type()
if t == "network" || t == "netdev" || t == "link" {
group = "network"
} else {
group = "system"
}
return
}
type Script []byte
// Destination builds the appropriate absolute file path for
// the Unit. The root argument indicates the effective base
// directory of the system (similar to a chroot).
func (u *Unit) Destination(root string) string {
dir := "etc"
if u.Runtime {
dir = "run"
}
if u.DropIn {
return path.Join(root, dir, "systemd", u.Group(), fmt.Sprintf("%s.d", u.Name), cloudConfigDropIn)
} else {
return path.Join(root, dir, "systemd", u.Group(), u.Name)
}
}
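As a quick illustration of the path logic above, here is a minimal, hypothetical sketch (not part of this change set) showing what Destination resolves to for a runtime network unit and for a cloud-config drop-in, assuming the package is imported from the repository path:

```go
package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/system"
)

func main() {
	// Runtime units land under run/, grouped by unit type.
	eth := system.Unit{Name: "50-eth0.network", Runtime: true}
	fmt.Println(eth.Destination("/")) // /run/systemd/network/50-eth0.network

	// Drop-in units get a 20-cloudinit.conf inside a <name>.d directory.
	docker := system.Unit{Name: "docker.service", DropIn: true}
	fmt.Println(docker.Destination("/")) // /etc/systemd/system/docker.service.d/20-cloudinit.conf
}
```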

22
test

@@ -13,12 +13,24 @@ COVER=${COVER:-"-cover"}
source ./build
-declare -a TESTPKGS=(initialize system datasource pkg network)
+declare -a TESTPKGS=(initialize
system
datasource
datasource/configdrive
datasource/file
datasource/metadata
datasource/metadata/cloudsigma
datasource/metadata/digitalocean
datasource/metadata/ec2
datasource/proc_cmdline
datasource/url
pkg
network)
if [ -z "$PKG" ]; then
-GOFMTPATH="$TESTPKGS coreos-cloudinit.go"
+GOFMTPATH="${TESTPKGS[*]} coreos-cloudinit.go"
# prepend repo path to each package
-TESTPKGS="${TESTPKGS[@]/#/${REPO_PATH}/} ./"
+TESTPKGS="${TESTPKGS[*]/#/${REPO_PATH}/} ./"
else
GOFMTPATH="$TESTPKGS"
# strip out slashes and dots from PKG=./foo/
@@ -33,5 +45,9 @@ go test ${COVER} $@ ${TESTPKGS}
echo "Checking gofmt..."
fmtRes=$(gofmt -l $GOFMTPATH)
if [ -n "$fmtRes" ]; then
echo "$fmtRes"
exit 1
fi
echo "Success"


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,43 @@
cepgo
=====
Cepko implements easy-to-use communication with CloudSigma's VMs through a
virtual serial port, without having to format the messages properly or parse
the output with the specific and sometimes confusing shell tools meant for
that purpose.
Having the server definition accessible from the VM can be useful in various
ways. For example, it is easy to determine from within the VM which network
interfaces are connected to the public network and which to the private one.
Another use is to pass data to initial VM setup scripts, such as setting the
hostname to the VM name or passing ssh public keys through server meta.
Example usage:
package main
import (
"fmt"
"github.com/cloudsigma/cepgo"
)
func main() {
c := cepgo.NewCepgo()
result, err := c.Meta()
if err != nil {
panic(err)
}
fmt.Printf("%#v", result)
}
Output:
map[string]interface {}{
"optimize_for":"custom",
"ssh_public_key":"ssh-rsa AAA...",
"description":"[...]",
}
For more information take a look at the Server Context section of the
CloudSigma API Docs: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html


@@ -0,0 +1,186 @@
// Cepko implements easy-to-use communication with CloudSigma's VMs through a
// virtual serial port, without having to format the messages properly or
// parse the output with the specific and sometimes confusing shell tools
// meant for that purpose.
//
// Having the server definition accessible from the VM can be useful in
// various ways. For example, it is easy to determine from within the VM
// which network interfaces are connected to the public network and which to
// the private one. Another use is to pass data to initial VM setup scripts,
// such as setting the hostname to the VM name or passing ssh public keys
// through server meta.
//
// Example usage:
//
// package main
//
// import (
// "fmt"
//
// "github.com/cloudsigma/cepgo"
// )
//
// func main() {
// c := cepgo.NewCepgo()
// result, err := c.Meta()
// if err != nil {
// panic(err)
// }
// fmt.Printf("%#v", result)
// }
//
// Output:
//
// map[string]string{
// "optimize_for":"custom",
// "ssh_public_key":"ssh-rsa AAA...",
// "description":"[...]",
// }
//
// For more information take a look at the Server Context section of the API Docs:
// http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
package cepgo
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"runtime"
"github.com/coreos/coreos-cloudinit/third_party/github.com/tarm/goserial"
)
const (
requestPattern = "<\n%s\n>"
EOT = '\x04' // End Of Transmission
)
var (
SerialPort string = "/dev/ttyS1"
Baud int = 115200
)
// Sets the serial port. If the operating system is Windows, CloudSigma's server
// context is at the COM2 port; otherwise (linux, freebsd, darwin) the port is
// left at the default /dev/ttyS1.
func init() {
if runtime.GOOS == "windows" {
SerialPort = "COM2"
}
}
// The default fetcher makes the connection to the serial port,
// writes given query and reads until the EOT symbol.
func fetchViaSerialPort(key string) ([]byte, error) {
config := &serial.Config{Name: SerialPort, Baud: Baud}
connection, err := serial.OpenPort(config)
if err != nil {
return nil, err
}
query := fmt.Sprintf(requestPattern, key)
if _, err := connection.Write([]byte(query)); err != nil {
return nil, err
}
reader := bufio.NewReader(connection)
answer, err := reader.ReadBytes(EOT)
if err != nil {
return nil, err
}
return answer[0 : len(answer)-1], nil
}
// Queries to the serial port can be executed only from an instance of this type.
// The result from each of them can be an interface{}, a map[string]string, or a
// single value. There is also a public method that directly calls the fetcher
// and returns the raw []byte from the serial port.
type Cepgo struct {
fetcher func(string) ([]byte, error)
}
// Creates a Cepgo instance with the default serial port fetcher.
func NewCepgo() *Cepgo {
cepgo := new(Cepgo)
cepgo.fetcher = fetchViaSerialPort
return cepgo
}
// Creates a Cepgo instance with custom fetcher.
func NewCepgoFetcher(fetcher func(string) ([]byte, error)) *Cepgo {
cepgo := new(Cepgo)
cepgo.fetcher = fetcher
return cepgo
}
// Fetches raw []byte from the serial port using directly the fetcher member.
func (c *Cepgo) FetchRaw(key string) ([]byte, error) {
return c.fetcher(key)
}
// Fetches a single key, tries to unmarshal the result from JSON, and returns
// it. If the unmarshalling fails, it is safe to assume the result is just a
// string, which is returned as-is.
func (c *Cepgo) Key(key string) (interface{}, error) {
var result interface{}
fetched, err := c.FetchRaw(key)
if err != nil {
return nil, err
}
err = json.Unmarshal(fetched, &result)
if err != nil {
return string(fetched), nil
}
return result, nil
}
// Fetches all the server context. Equivalent of c.Key("")
func (c *Cepgo) All() (interface{}, error) {
return c.Key("")
}
// Fetches only the object meta field and makes sure to return a proper
// map[string]string
func (c *Cepgo) Meta() (map[string]string, error) {
rawMeta, err := c.Key("/meta/")
if err != nil {
return nil, err
}
return typeAssertToMapOfStrings(rawMeta)
}
// Fetches only the global context and makes sure to return a proper
// map[string]string
func (c *Cepgo) GlobalContext() (map[string]string, error) {
rawContext, err := c.Key("/global_context/")
if err != nil {
return nil, err
}
return typeAssertToMapOfStrings(rawContext)
}
// Just a little helper function that uses type assertions in order to convert
// an interface{} to a map[string]string if this is possible.
func typeAssertToMapOfStrings(raw interface{}) (map[string]string, error) {
result := make(map[string]string)
dictionary, ok := raw.(map[string]interface{})
if !ok {
return nil, errors.New("Received bytes are formatted badly")
}
for key, rawValue := range dictionary {
if value, ok := rawValue.(string); ok {
result[key] = value
} else {
return nil, errors.New("Server context metadata is formatted badly")
}
}
return result, nil
}
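The fetcher hook above makes the package easy to exercise without CloudSigma hardware. The following hypothetical sketch (not part of the vendored code) swaps in a canned fetcher, using the upstream import path from the package's own README:

```go
package main

import (
	"fmt"

	"github.com/cloudsigma/cepgo"
)

// canned stands in for the serial-port fetcher and returns fixed JSON,
// which is enough for Meta() to parse on any machine.
func canned(key string) ([]byte, error) {
	return []byte(`{"ssh_public_key": "ssh-rsa AAA... user@host"}`), nil
}

func main() {
	c := cepgo.NewCepgoFetcher(canned)
	meta, err := c.Meta()
	if err != nil {
		panic(err)
	}
	fmt.Println(meta["ssh_public_key"])
}
```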


@@ -0,0 +1,122 @@
package cepgo
import (
"encoding/json"
"testing"
)
func fetchMock(key string) ([]byte, error) {
context := []byte(`{
"context": true,
"cpu": 4000,
"cpu_model": null,
"cpus_instead_of_cores": false,
"enable_numa": false,
"global_context": {
"some_global_key": "some_global_val"
},
"grantees": [],
"hv_relaxed": false,
"hv_tsc": false,
"jobs": [],
"mem": 4294967296,
"meta": {
"base64_fields": "cloudinit-user-data",
"cloudinit-user-data": "I2Nsb3VkLWNvbmZpZwoKaG9zdG5hbWU6IGNvcmVvczE=",
"ssh_public_key": "ssh-rsa AAAAB2NzaC1yc2E.../hQ5D5 john@doe"
},
"name": "coreos",
"nics": [
{
"runtime": {
"interface_type": "public",
"ip_v4": {
"uuid": "31.171.251.74"
},
"ip_v6": null
},
"vlan": null
}
],
"smp": 2,
"status": "running",
"uuid": "20a0059b-041e-4d0c-bcc6-9b2852de48b3"
}`)
if key == "" {
return context, nil
}
var marshalledContext map[string]interface{}
err := json.Unmarshal(context, &marshalledContext)
if err != nil {
return nil, err
}
if key[0] == '/' {
key = key[1:]
}
if key[len(key)-1] == '/' {
key = key[:len(key)-1]
}
return json.Marshal(marshalledContext[key])
}
func TestAll(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
result, err := cepgo.All()
if err != nil {
t.Error(err)
}
for _, key := range []string{"meta", "name", "uuid", "global_context"} {
if _, ok := result.(map[string]interface{})[key]; !ok {
t.Errorf("%s not in all keys", key)
}
}
}
func TestKey(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
result, err := cepgo.Key("uuid")
if err != nil {
t.Error(err)
}
if _, ok := result.(string); !ok {
t.Errorf("%#v\n", result)
t.Error("Fetching the uuid did not return a string")
}
}
func TestMeta(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
meta, err := cepgo.Meta()
if err != nil {
t.Errorf("%#v\n", meta)
t.Error(err)
}
if _, ok := meta["ssh_public_key"]; !ok {
t.Error("ssh_public_key is not in the meta")
}
}
func TestGlobalContext(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
result, err := cepgo.GlobalContext()
if err != nil {
t.Error(err)
}
if _, ok := result["some_global_key"]; !ok {
t.Error("some_global_key is not in the global context")
}
}


@@ -0,0 +1,27 @@
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -0,0 +1,63 @@
GoSerial
========
A simple go package to allow you to read and write from the
serial port as a stream of bytes.
Details
-------
It aims to have the same API on all platforms, including windows. As
an added bonus, the windows package does not use cgo, so you can cross
compile for windows from another platform. Unfortunately goinstall
does not currently let you cross compile so you will have to do it
manually:
GOOS=windows make clean install
Currently there is very little in the way of configurability. You can
set the baud rate. Then you can Read(), Write(), or Close() the
connection. Read() will block until at least one byte is returned.
Write is the same. There is currently no exposed way to set the
timeouts, though patches are welcome.
Currently all ports are opened with 8 data bits, 1 stop bit, no
parity, no hardware flow control, and no software flow control. This
works fine for many real devices and many faux serial devices
including usb-to-serial converters and bluetooth serial ports.
You may Read() and Write() simultaneously on the same connection (from
different goroutines).
Usage
-----
```go
package main
import (
"github.com/tarm/goserial"
"log"
)
func main() {
c := &serial.Config{Name: "COM45", Baud: 115200}
s, err := serial.OpenPort(c)
if err != nil {
log.Fatal(err)
}
n, err := s.Write([]byte("test"))
if err != nil {
log.Fatal(err)
}
buf := make([]byte, 128)
n, err = s.Read(buf)
if err != nil {
log.Fatal(err)
}
log.Printf("%q", buf[:n])
}
```
Possible Future Work
--------------------
- better tests (loopback etc)


@@ -0,0 +1,39 @@
package serial
import (
"testing"
)
func TestConnection(t *testing.T) {
if testing.Short() {
return
}
c0 := &Config{Name: "COM5", Baud: 115200}
/*
c1 := new(Config)
c1.Name = "COM5"
c1.Baud = 115200
*/
s, err := OpenPort(c0)
if err != nil {
t.Fatal(err)
}
_, err = s.Write([]byte("test"))
if err != nil {
t.Fatal(err)
}
buf := make([]byte, 128)
_, err = s.Read(buf)
if err != nil {
t.Fatal(err)
}
}
// BUG(tarmigan): Add loopback test
func TestLoopback(t *testing.T) {
}


@@ -0,0 +1,99 @@
/*
Goserial is a simple go package to allow you to read and write from
the serial port as a stream of bytes.
It aims to have the same API on all platforms, including windows. As
an added bonus, the windows package does not use cgo, so you can cross
compile for windows from another platform. Unfortunately goinstall
does not currently let you cross compile so you will have to do it
manually:
GOOS=windows make clean install
Currently there is very little in the way of configurability. You can
set the baud rate. Then you can Read(), Write(), or Close() the
connection. Read() will block until at least one byte is returned.
Write is the same. There is currently no exposed way to set the
timeouts, though patches are welcome.
Currently all ports are opened with 8 data bits, 1 stop bit, no
parity, no hardware flow control, and no software flow control. This
works fine for many real devices and many faux serial devices
including usb-to-serial converters and bluetooth serial ports.
You may Read() and Write() simultaneously on the same connection (from
different goroutines).
Example usage:
package main
import (
"github.com/tarm/goserial"
"log"
)
func main() {
c := &serial.Config{Name: "COM5", Baud: 115200}
s, err := serial.OpenPort(c)
if err != nil {
log.Fatal(err)
}
n, err := s.Write([]byte("test"))
if err != nil {
log.Fatal(err)
}
buf := make([]byte, 128)
n, err = s.Read(buf)
if err != nil {
log.Fatal(err)
}
log.Printf("%q", buf[:n])
}
*/
package serial
import "io"
// Config contains the information needed to open a serial port.
//
// Currently few options are implemented, but more may be added in the
// future (patches welcome), so it is recommended that you create a
// new config addressing the fields by name rather than by order.
//
// For example:
//
// c0 := &serial.Config{Name: "COM45", Baud: 115200}
// or
// c1 := new(serial.Config)
// c1.Name = "/dev/tty.usbserial"
// c1.Baud = 115200
//
type Config struct {
Name string
Baud int
// Size int // 0 get translated to 8
// Parity SomeNewTypeToGetCorrectDefaultOf_None
// StopBits SomeNewTypeToGetCorrectDefaultOf_1
// RTSFlowControl bool
// DTRFlowControl bool
// XONFlowControl bool
// CRLFTranslate bool
// TimeoutStuff int
}
// OpenPort opens a serial port with the specified configuration
func OpenPort(c *Config) (io.ReadWriteCloser, error) {
return openPort(c.Name, c.Baud)
}
// func Flush()
// func SendBreak()
// func RegisterBreakHandler(func())


@@ -0,0 +1,90 @@
// +build linux,!cgo
package serial
import (
"io"
"os"
"syscall"
"unsafe"
)
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
var bauds = map[int]uint32{
50: syscall.B50,
75: syscall.B75,
110: syscall.B110,
134: syscall.B134,
150: syscall.B150,
200: syscall.B200,
300: syscall.B300,
600: syscall.B600,
1200: syscall.B1200,
1800: syscall.B1800,
2400: syscall.B2400,
4800: syscall.B4800,
9600: syscall.B9600,
19200: syscall.B19200,
38400: syscall.B38400,
57600: syscall.B57600,
115200: syscall.B115200,
230400: syscall.B230400,
460800: syscall.B460800,
500000: syscall.B500000,
576000: syscall.B576000,
921600: syscall.B921600,
1000000: syscall.B1000000,
1152000: syscall.B1152000,
1500000: syscall.B1500000,
2000000: syscall.B2000000,
2500000: syscall.B2500000,
3000000: syscall.B3000000,
3500000: syscall.B3500000,
4000000: syscall.B4000000,
}
rate := bauds[baud]
if rate == 0 {
return
}
f, err := os.OpenFile(name, syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_NONBLOCK, 0666)
if err != nil {
return nil, err
}
defer func() {
if err != nil && f != nil {
f.Close()
}
}()
fd := f.Fd()
t := syscall.Termios{
Iflag: syscall.IGNPAR,
Cflag: syscall.CS8 | syscall.CREAD | syscall.CLOCAL | rate,
Cc: [32]uint8{syscall.VMIN: 1},
Ispeed: rate,
Ospeed: rate,
}
if _, _, errno := syscall.Syscall6(
syscall.SYS_IOCTL,
uintptr(fd),
uintptr(syscall.TCSETS),
uintptr(unsafe.Pointer(&t)),
0,
0,
0,
); errno != 0 {
return nil, errno
}
if err = syscall.SetNonblock(int(fd), false); err != nil {
return
}
return f, nil
}


@@ -0,0 +1,107 @@
// +build !windows,cgo
package serial
// #include <termios.h>
// #include <unistd.h>
import "C"
// TODO: Maybe change to using syscall package + ioctl instead of cgo
import (
"errors"
"fmt"
"io"
"os"
"syscall"
//"unsafe"
)
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
f, err := os.OpenFile(name, syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_NONBLOCK, 0666)
if err != nil {
return
}
fd := C.int(f.Fd())
if C.isatty(fd) != 1 {
f.Close()
return nil, errors.New("File is not a tty")
}
var st C.struct_termios
_, err = C.tcgetattr(fd, &st)
if err != nil {
f.Close()
return nil, err
}
var speed C.speed_t
switch baud {
case 115200:
speed = C.B115200
case 57600:
speed = C.B57600
case 38400:
speed = C.B38400
case 19200:
speed = C.B19200
case 9600:
speed = C.B9600
case 4800:
speed = C.B4800
case 2400:
speed = C.B2400
default:
f.Close()
return nil, fmt.Errorf("Unknown baud rate %v", baud)
}
_, err = C.cfsetispeed(&st, speed)
if err != nil {
f.Close()
return nil, err
}
_, err = C.cfsetospeed(&st, speed)
if err != nil {
f.Close()
return nil, err
}
// Select local mode
st.c_cflag |= (C.CLOCAL | C.CREAD)
// Select raw mode
st.c_lflag &= ^C.tcflag_t(C.ICANON | C.ECHO | C.ECHOE | C.ISIG)
st.c_oflag &= ^C.tcflag_t(C.OPOST)
_, err = C.tcsetattr(fd, C.TCSANOW, &st)
if err != nil {
f.Close()
return nil, err
}
//fmt.Println("Tweaking", name)
r1, _, e := syscall.Syscall(syscall.SYS_FCNTL,
uintptr(f.Fd()),
uintptr(syscall.F_SETFL),
uintptr(0))
if e != 0 || r1 != 0 {
s := fmt.Sprint("Clearing NONBLOCK syscall error:", e, r1)
f.Close()
return nil, errors.New(s)
}
/*
r1, _, e = syscall.Syscall(syscall.SYS_IOCTL,
uintptr(f.Fd()),
uintptr(0x80045402), // IOSSIOSPEED
uintptr(unsafe.Pointer(&baud)));
if e != 0 || r1 != 0 {
s := fmt.Sprint("Baudrate syscall error:", e, r1)
f.Close()
return nil, os.NewError(s)
}
*/
return f, nil
}


@@ -0,0 +1,263 @@
// +build windows
package serial
import (
"fmt"
"io"
"os"
"sync"
"syscall"
"unsafe"
)
type serialPort struct {
f *os.File
fd syscall.Handle
rl sync.Mutex
wl sync.Mutex
ro *syscall.Overlapped
wo *syscall.Overlapped
}
type structDCB struct {
DCBlength, BaudRate uint32
flags [4]byte
wReserved, XonLim, XoffLim uint16
ByteSize, Parity, StopBits byte
XonChar, XoffChar, ErrorChar, EofChar, EvtChar byte
wReserved1 uint16
}
type structTimeouts struct {
ReadIntervalTimeout uint32
ReadTotalTimeoutMultiplier uint32
ReadTotalTimeoutConstant uint32
WriteTotalTimeoutMultiplier uint32
WriteTotalTimeoutConstant uint32
}
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
if len(name) > 0 && name[0] != '\\' {
name = "\\\\.\\" + name
}
h, err := syscall.CreateFile(syscall.StringToUTF16Ptr(name),
syscall.GENERIC_READ|syscall.GENERIC_WRITE,
0,
nil,
syscall.OPEN_EXISTING,
syscall.FILE_ATTRIBUTE_NORMAL|syscall.FILE_FLAG_OVERLAPPED,
0)
if err != nil {
return nil, err
}
f := os.NewFile(uintptr(h), name)
defer func() {
if err != nil {
f.Close()
}
}()
if err = setCommState(h, baud); err != nil {
return
}
if err = setupComm(h, 64, 64); err != nil {
return
}
if err = setCommTimeouts(h); err != nil {
return
}
if err = setCommMask(h); err != nil {
return
}
ro, err := newOverlapped()
if err != nil {
return
}
wo, err := newOverlapped()
if err != nil {
return
}
port := new(serialPort)
port.f = f
port.fd = h
port.ro = ro
port.wo = wo
return port, nil
}
func (p *serialPort) Close() error {
return p.f.Close()
}
func (p *serialPort) Write(buf []byte) (int, error) {
p.wl.Lock()
defer p.wl.Unlock()
if err := resetEvent(p.wo.HEvent); err != nil {
return 0, err
}
var n uint32
err := syscall.WriteFile(p.fd, buf, &n, p.wo)
if err != nil && err != syscall.ERROR_IO_PENDING {
return int(n), err
}
return getOverlappedResult(p.fd, p.wo)
}
func (p *serialPort) Read(buf []byte) (int, error) {
if p == nil || p.f == nil {
return 0, fmt.Errorf("Invalid port on read %v %v", p, p.f)
}
p.rl.Lock()
defer p.rl.Unlock()
if err := resetEvent(p.ro.HEvent); err != nil {
return 0, err
}
var done uint32
err := syscall.ReadFile(p.fd, buf, &done, p.ro)
if err != nil && err != syscall.ERROR_IO_PENDING {
return int(done), err
}
return getOverlappedResult(p.fd, p.ro)
}
var (
nSetCommState,
nSetCommTimeouts,
nSetCommMask,
nSetupComm,
nGetOverlappedResult,
nCreateEvent,
nResetEvent uintptr
)
func init() {
k32, err := syscall.LoadLibrary("kernel32.dll")
if err != nil {
panic("LoadLibrary " + err.Error())
}
defer syscall.FreeLibrary(k32)
nSetCommState = getProcAddr(k32, "SetCommState")
nSetCommTimeouts = getProcAddr(k32, "SetCommTimeouts")
nSetCommMask = getProcAddr(k32, "SetCommMask")
nSetupComm = getProcAddr(k32, "SetupComm")
nGetOverlappedResult = getProcAddr(k32, "GetOverlappedResult")
nCreateEvent = getProcAddr(k32, "CreateEventW")
nResetEvent = getProcAddr(k32, "ResetEvent")
}
func getProcAddr(lib syscall.Handle, name string) uintptr {
addr, err := syscall.GetProcAddress(lib, name)
if err != nil {
panic(name + " " + err.Error())
}
return addr
}
func setCommState(h syscall.Handle, baud int) error {
var params structDCB
params.DCBlength = uint32(unsafe.Sizeof(params))
params.flags[0] = 0x01 // fBinary
params.flags[0] |= 0x10 // Assert DSR
params.BaudRate = uint32(baud)
params.ByteSize = 8
r, _, err := syscall.Syscall(nSetCommState, 2, uintptr(h), uintptr(unsafe.Pointer(&params)), 0)
if r == 0 {
return err
}
return nil
}
func setCommTimeouts(h syscall.Handle) error {
var timeouts structTimeouts
const MAXDWORD = 1<<32 - 1
timeouts.ReadIntervalTimeout = MAXDWORD
timeouts.ReadTotalTimeoutMultiplier = MAXDWORD
timeouts.ReadTotalTimeoutConstant = MAXDWORD - 1
/* From http://msdn.microsoft.com/en-us/library/aa363190(v=VS.85).aspx
For blocking I/O see below:
Remarks:
If an application sets ReadIntervalTimeout and
ReadTotalTimeoutMultiplier to MAXDWORD and sets
ReadTotalTimeoutConstant to a value greater than zero and
less than MAXDWORD, one of the following occurs when the
ReadFile function is called:
If there are any bytes in the input buffer, ReadFile returns
immediately with the bytes in the buffer.
If there are no bytes in the input buffer, ReadFile waits
until a byte arrives and then returns immediately.
If no bytes arrive within the time specified by
ReadTotalTimeoutConstant, ReadFile times out.
*/
r, _, err := syscall.Syscall(nSetCommTimeouts, 2, uintptr(h), uintptr(unsafe.Pointer(&timeouts)), 0)
if r == 0 {
return err
}
return nil
}
func setupComm(h syscall.Handle, in, out int) error {
r, _, err := syscall.Syscall(nSetupComm, 3, uintptr(h), uintptr(in), uintptr(out))
if r == 0 {
return err
}
return nil
}
func setCommMask(h syscall.Handle) error {
const EV_RXCHAR = 0x0001
r, _, err := syscall.Syscall(nSetCommMask, 2, uintptr(h), EV_RXCHAR, 0)
if r == 0 {
return err
}
return nil
}
func resetEvent(h syscall.Handle) error {
r, _, err := syscall.Syscall(nResetEvent, 1, uintptr(h), 0, 0)
if r == 0 {
return err
}
return nil
}
func newOverlapped() (*syscall.Overlapped, error) {
var overlapped syscall.Overlapped
r, _, err := syscall.Syscall6(nCreateEvent, 4, 0, 1, 0, 0, 0, 0)
if r == 0 {
return nil, err
}
overlapped.HEvent = syscall.Handle(r)
return &overlapped, nil
}
func getOverlappedResult(h syscall.Handle, overlapped *syscall.Overlapped) (int, error) {
var n int
r, _, err := syscall.Syscall6(nGetOverlappedResult, 4,
uintptr(h),
uintptr(unsafe.Pointer(overlapped)),
uintptr(unsafe.Pointer(&n)), 1, 0, 0)
if r == 0 {
return n, err
}
return n, nil
}


@@ -1,3 +1,15 @@
The following files were ported to Go from C files of libyaml, and thus
are still covered by their original copyright and license:
apic.go
emitterc.go
parserc.go
readerc.go
scannerc.go
writerc.go
yamlh.go
yamlprivateh.go
Copyright (c) 2006 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of

128
third_party/gopkg.in/yaml.v1/README.md vendored Normal file

@@ -0,0 +1,128 @@
# YAML support for the Go language
Introduction
------------
The yaml package enables Go programs to comfortably encode and decode YAML
values. It was developed within [Canonical](https://www.canonical.com) as
part of the [juju](https://juju.ubuntu.com) project, and is based on a
pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
C library to parse and generate YAML data quickly and reliably.
Compatibility
-------------
The yaml package is almost compatible with YAML 1.1, including support for
anchors, tags, etc. There are still a few missing bits, such as document
merging, base-60 floats (huh?), and multi-document unmarshalling. These
features are not hard to add, and will be introduced as necessary.
Installation and usage
----------------------
The import path for the package is *gopkg.in/yaml.v1*.
To install it, run:
go get gopkg.in/yaml.v1
API documentation
-----------------
If opened in a browser, the import path itself leads to the API documentation:
* [https://gopkg.in/yaml.v1](https://gopkg.in/yaml.v1)
API stability
-------------
The package API for yaml v1 will remain stable as described in [gopkg.in](https://gopkg.in).
License
-------
The yaml package is licensed under the LGPL with an exception that allows it to be linked statically. Please see the LICENSE file for details.
Example
-------
```Go
package main
import (
"fmt"
"log"
"gopkg.in/yaml.v1"
)
var data = `
a: Easy!
b:
c: 2
d: [3, 4]
`
type T struct {
A string
B struct{C int; D []int ",flow"}
}
func main() {
t := T{}
err := yaml.Unmarshal([]byte(data), &t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t:\n%v\n\n", t)
d, err := yaml.Marshal(&t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t dump:\n%s\n\n", string(d))
m := make(map[interface{}]interface{})
err = yaml.Unmarshal([]byte(data), &m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m:\n%v\n\n", m)
d, err = yaml.Marshal(&m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m dump:\n%s\n\n", string(d))
}
```
This example will generate the following output:
```
--- t:
{Easy! {2 [3 4]}}
--- t dump:
a: Easy!
b:
c: 2
d: [3, 4]
--- m:
map[a:Easy! b:map[c:2 d:[3 4]]]
--- m dump:
a: Easy!
b:
c: 2
d:
- 3
- 4
```


@@ -1,4 +1,4 @@
-package goyaml
+package yaml
import (
"io"


@@ -1,8 +1,9 @@
-package goyaml
+package yaml
import (
"reflect"
"strconv"
"time"
)
const (
@@ -211,6 +212,16 @@ func newDecoder() *decoder {
// returned to call SetYAML() with the value of *out once it's defined.
//
func (d *decoder) setter(tag string, out *reflect.Value, good *bool) (set func()) {
if (*out).Kind() != reflect.Ptr && (*out).CanAddr() {
setter, _ := (*out).Addr().Interface().(Setter)
if setter != nil {
var arg interface{}
*out = reflect.ValueOf(&arg).Elem()
return func() {
*good = setter.SetYAML(tag, arg)
}
}
}
again := true
for again {
again = false
@@ -279,17 +290,20 @@ func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
return good
}
var durationType = reflect.TypeOf(time.Duration(0))
func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
var tag string
var resolved interface{}
if n.tag == "" && !n.implicit {
tag = "!!str"
resolved = n.value
} else {
tag, resolved = resolve(n.tag, n.value)
}
if set := d.setter(tag, &out, &good); set != nil {
defer set()
}
}
switch out.Kind() {
case reflect.String:
if resolved != nil {
@@ -320,6 +334,14 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
out.SetInt(int64(resolved))
good = true
}
case string:
if out.Type() == durationType {
d, err := time.ParseDuration(resolved)
if err == nil {
out.SetInt(int64(d))
good = true
}
}
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
switch resolved := resolved.(type) {
@@ -437,6 +459,10 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
}
l := len(n.children)
for i := 0; i < l; i += 2 {
if isMerge(n.children[i]) {
d.merge(n.children[i+1], out)
continue
}
k := reflect.New(kt).Elem()
if d.unmarshal(n.children[i], k) {
e := reflect.New(et).Elem()
@@ -456,7 +482,12 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
name := settableValueOf("")
l := len(n.children)
for i := 0; i < l; i += 2 {
-if !d.unmarshal(n.children[i], name) {
+ni := n.children[i]
if isMerge(ni) {
d.merge(n.children[i+1], out)
continue
}
if !d.unmarshal(ni, name) {
continue
}
if info, ok := sinfo.FieldsMap[name.String()]; ok {
@@ -471,3 +502,37 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
}
return true
}
func (d *decoder) merge(n *node, out reflect.Value) {
const wantMap = "map merge requires map or sequence of maps as the value"
switch n.kind {
case mappingNode:
d.unmarshal(n, out)
case aliasNode:
an, ok := d.doc.anchors[n.value]
if ok && an.kind != mappingNode {
panic(wantMap)
}
d.unmarshal(n, out)
case sequenceNode:
// Step backwards as earlier nodes take precedence.
for i := len(n.children)-1; i >= 0; i-- {
ni := n.children[i]
if ni.kind == aliasNode {
an, ok := d.doc.anchors[ni.value]
if ok && an.kind != mappingNode {
panic(wantMap)
}
} else if ni.kind != mappingNode {
panic(wantMap)
}
d.unmarshal(ni, out)
}
default:
panic(wantMap)
}
}
func isMerge(n *node) bool {
return n.kind == scalarNode && n.value == "<<" && (n.implicit == true || n.tag == "!!merge" || n.tag == "tag:yaml.org,2002:merge")
}
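To see both of the behaviours introduced above in one place, here is a small, hypothetical sketch (not from the repository) that relies on the merge-key handling and the time.Duration support, using the gopkg.in/yaml.v1 import path from the README:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"gopkg.in/yaml.v1"
)

var doc = `
defaults: &defaults
  timeout: 30s
job:
  <<: *defaults
  name: fetch
`

func main() {
	var out struct {
		Job struct {
			Name    string
			Timeout time.Duration
		}
	}
	if err := yaml.Unmarshal([]byte(doc), &out); err != nil {
		log.Fatal(err)
	}
	// The merge key copies timeout in from the anchored map, and the new
	// duration case turns the scalar "30s" into a time.Duration.
	fmt.Println(out.Job.Name, out.Job.Timeout) // fetch 30s
}
```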


@@ -1,10 +1,11 @@
-package goyaml_test
+package yaml_test
import (
-. "launchpad.net/gocheck"
+. "gopkg.in/check.v1"
-"github.com/coreos/coreos-cloudinit/third_party/launchpad.net/goyaml"
+"gopkg.in/yaml.v1"
"math"
"reflect"
"time"
)
var unmarshalIntTest = 123
@@ -350,6 +351,32 @@ var unmarshalTests = []struct {
C inlineB `yaml:",inline"`
}{1, inlineB{2, inlineC{3}}},
},
// bug 1243827
{
"a: -b_c",
map[string]interface{}{"a": "-b_c"},
},
{
"a: +b_c",
map[string]interface{}{"a": "+b_c"},
},
{
"a: 50cent_of_dollar",
map[string]interface{}{"a": "50cent_of_dollar"},
},
// Duration
{
"a: 3s",
map[string]time.Duration{"a": 3 * time.Second},
},
// Issue #24.
{
"a: <foo>",
map[string]string{"a": "<foo>"},
},
}
type inlineB struct {
@@ -377,7 +404,7 @@ func (s *S) TestUnmarshal(c *C) {
pv := reflect.New(pt.Elem())
value = pv.Interface()
}
-err := goyaml.Unmarshal([]byte(item.data), value)
+err := yaml.Unmarshal([]byte(item.data), value)
c.Assert(err, IsNil, Commentf("Item #%d", i))
if t.Kind() == reflect.String {
c.Assert(*value.(*string), Equals, item.value, Commentf("Item #%d", i))
@@ -389,7 +416,7 @@ func (s *S) TestUnmarshal(c *C) {
func (s *S) TestUnmarshalNaN(c *C) {
value := map[string]interface{}{}
-err := goyaml.Unmarshal([]byte("notanum: .NaN"), &value)
+err := yaml.Unmarshal([]byte("notanum: .NaN"), &value)
c.Assert(err, IsNil)
c.Assert(math.IsNaN(value["notanum"].(float64)), Equals, true)
}
@@ -408,7 +435,7 @@ var unmarshalErrorTests = []struct {
func (s *S) TestUnmarshalErrors(c *C) {
for _, item := range unmarshalErrorTests {
var value interface{}
-err := goyaml.Unmarshal([]byte(item.data), &value)
+err := yaml.Unmarshal([]byte(item.data), &value)
c.Assert(err, ErrorMatches, item.error, Commentf("Partial unmarshal: %#v", value))
}
}
@@ -421,6 +448,8 @@ var setterTests = []struct {
{"_: [1,A]", "!!seq", []interface{}{1, "A"}},
{"_: 10", "!!int", 10},
{"_: null", "!!null", nil},
{`_: BAR!`, "!!str", "BAR!"},
{`_: "BAR!"`, "!!str", "BAR!"},
{"_: !!foo 'BAR!'", "!!foo", "BAR!"},
}
@@ -442,17 +471,31 @@ func (o *typeWithSetter) SetYAML(tag string, value interface{}) (ok bool) {
return true
}
-type typeWithSetterField struct {
+type setterPointerType struct {
Field *typeWithSetter "_"
}
-func (s *S) TestUnmarshalWithSetter(c *C) {
+type setterValueType struct {
Field typeWithSetter "_"
}
func (s *S) TestUnmarshalWithPointerSetter(c *C) {
for _, item := range setterTests {
-obj := &typeWithSetterField{}
+obj := &setterPointerType{}
-err := goyaml.Unmarshal([]byte(item.data), obj)
+err := yaml.Unmarshal([]byte(item.data), obj)
c.Assert(err, IsNil)
-c.Assert(obj.Field, NotNil,
-Commentf("Pointer not initialized (%#v)", item.value))
+c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
+c.Assert(obj.Field.tag, Equals, item.tag)
c.Assert(obj.Field.value, DeepEquals, item.value)
}
}
func (s *S) TestUnmarshalWithValueSetter(c *C) {
for _, item := range setterTests {
obj := &setterValueType{}
err := yaml.Unmarshal([]byte(item.data), obj)
c.Assert(err, IsNil)
c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
c.Assert(obj.Field.tag, Equals, item.tag)
c.Assert(obj.Field.value, DeepEquals, item.value)
}
@@ -460,7 +503,7 @@ func (s *S) TestUnmarshalWithSetter(c *C) {
func (s *S) TestUnmarshalWholeDocumentWithSetter(c *C) {
obj := &typeWithSetter{}
-err := goyaml.Unmarshal([]byte(setterTests[0].data), obj)
+err := yaml.Unmarshal([]byte(setterTests[0].data), obj)
c.Assert(err, IsNil)
c.Assert(obj.tag, Equals, setterTests[0].tag)
value, ok := obj.value.(map[interface{}]interface{})
@@ -477,8 +520,8 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
}()
m := map[string]*typeWithSetter{}
-data := "{abc: 1, def: 2, ghi: 3, jkl: 4}"
+data := `{abc: 1, def: 2, ghi: 3, jkl: 4}`
-err := goyaml.Unmarshal([]byte(data), m)
+err := yaml.Unmarshal([]byte(data), m)
c.Assert(err, IsNil)
c.Assert(m["abc"], NotNil)
c.Assert(m["def"], IsNil)
@@ -489,6 +532,98 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
c.Assert(m["ghi"].value, Equals, 3)
}
// From http://yaml.org/type/merge.html
var mergeTests = `
anchors:
- &CENTER { "x": 1, "y": 2 }
- &LEFT { "x": 0, "y": 2 }
- &BIG { "r": 10 }
- &SMALL { "r": 1 }
# All the following maps are equal:
plain:
# Explicit keys
"x": 1
"y": 2
"r": 10
label: center/big
mergeOne:
# Merge one map
<< : *CENTER
"r": 10
label: center/big
mergeMultiple:
# Merge multiple maps
<< : [ *CENTER, *BIG ]
label: center/big
override:
# Override
<< : [ *BIG, *LEFT, *SMALL ]
"x": 1
label: center/big
shortTag:
# Explicit short merge tag
!!merge "<<" : [ *CENTER, *BIG ]
label: center/big
longTag:
# Explicit merge long tag
!<tag:yaml.org,2002:merge> "<<" : [ *CENTER, *BIG ]
label: center/big
inlineMap:
# Inlined map
<< : {"x": 1, "y": 2, "r": 10}
label: center/big
inlineSequenceMap:
# Inlined map in sequence
<< : [ *CENTER, {"r": 10} ]
label: center/big
`
func (s *S) TestMerge(c *C) {
var want = map[interface{}]interface{}{
"x": 1,
"y": 2,
"r": 10,
"label": "center/big",
}
var m map[string]interface{}
err := yaml.Unmarshal([]byte(mergeTests), &m)
c.Assert(err, IsNil)
for name, test := range m {
if name == "anchors" {
continue
}
c.Assert(test, DeepEquals, want, Commentf("test %q failed", name))
}
}
func (s *S) TestMergeStruct(c *C) {
type Data struct {
X, Y, R int
Label string
}
want := Data{1, 2, 10, "center/big"}
var m map[string]Data
err := yaml.Unmarshal([]byte(mergeTests), &m)
c.Assert(err, IsNil)
for name, test := range m {
if name == "anchors" {
continue
}
c.Assert(test, Equals, want, Commentf("test %q failed", name))
}
}
//var data []byte
//func init() {
// var err error
@@ -502,7 +637,7 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
// var err error
// for i := 0; i < c.N; i++ {
// var v map[string]interface{}
-// err = goyaml.Unmarshal(data, &v)
+// err = yaml.Unmarshal(data, &v)
// }
// if err != nil {
// panic(err)
@@ -511,9 +646,9 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
//
//func (s *S) BenchmarkMarshal(c *C) {
// var v map[string]interface{}
-// goyaml.Unmarshal(data, &v)
+// yaml.Unmarshal(data, &v)
// c.ResetTimer()
// for i := 0; i < c.N; i++ {
-// goyaml.Marshal(&v)
+// yaml.Marshal(&v)
// }
//}


@@ -1,4 +1,4 @@
-package goyaml
+package yaml
import (
"bytes"


@@ -1,9 +1,10 @@
-package goyaml
+package yaml
import (
"reflect"
"sort"
"strconv"
"time"
)
type encoder struct {
@@ -85,7 +86,11 @@ func (e *encoder) marshal(tag string, in reflect.Value) {
case reflect.String:
e.stringv(tag, in)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
if in.Type() == durationType {
e.stringv(tag, reflect.ValueOf(in.Interface().(time.Duration).String()))
} else {
e.intv(tag, in)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
e.uintv(tag, in)
case reflect.Float32, reflect.Float64:


@@ -1,12 +1,13 @@
-package goyaml_test
+package yaml_test
import (
"fmt"
-. "launchpad.net/gocheck"
+"gopkg.in/yaml.v1"
-"github.com/coreos/coreos-cloudinit/third_party/launchpad.net/goyaml"
+. "gopkg.in/check.v1"
"math"
"strconv"
"strings"
"time"
)
var marshalIntTest = 123
@@ -212,11 +213,23 @@ var marshalTests = []struct {
}{1, inlineB{2, inlineC{3}}},
"a: 1\nb: 2\nc: 3\n",
},
// Duration
{
map[string]time.Duration{"a": 3 * time.Second},
"a: 3s\n",
},
// Issue #24.
{
map[string]string{"a": "<foo>"},
"a: <foo>\n",
},
}
func (s *S) TestMarshal(c *C) {
for _, item := range marshalTests {
-data, err := goyaml.Marshal(item.value)
+data, err := yaml.Marshal(item.value)
c.Assert(err, IsNil)
c.Assert(string(data), Equals, item.data)
}
@@ -237,7 +250,7 @@ var marshalErrorTests = []struct {
func (s *S) TestMarshalErrors(c *C) {
for _, item := range marshalErrorTests {
-_, err := goyaml.Marshal(item.value)
+_, err := yaml.Marshal(item.value)
c.Assert(err, ErrorMatches, item.error)
}
}
@@ -269,12 +282,12 @@ func (s *S) TestMarshalTypeCache(c *C) {
var err error
func() {
type T struct{ A int }
-data, err = goyaml.Marshal(&T{})
+data, err = yaml.Marshal(&T{})
c.Assert(err, IsNil)
}()
func() {
type T struct{ B int }
-data, err = goyaml.Marshal(&T{})
+data, err = yaml.Marshal(&T{})
c.Assert(err, IsNil)
}()
c.Assert(string(data), Equals, "b: 0\n")
@@ -298,7 +311,7 @@ func (s *S) TestMashalWithGetter(c *C) {
obj := &typeWithGetterField{}
obj.Field.tag = item.tag
obj.Field.value = item.value
-data, err := goyaml.Marshal(obj)
+data, err := yaml.Marshal(obj)
c.Assert(err, IsNil)
c.Assert(string(data), Equals, string(item.data))
}
@@ -308,7 +321,7 @@ func (s *S) TestUnmarshalWholeDocumentWithGetter(c *C) {
obj := &typeWithGetter{}
obj.tag = ""
obj.value = map[string]string{"hello": "world!"}
-data, err := goyaml.Marshal(obj)
+data, err := yaml.Marshal(obj)
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "hello: world!\n")
}
@@ -356,7 +369,7 @@ func (s *S) TestSortedOutput(c *C) {
for _, k := range order { for _, k := range order {
m[k] = 1 m[k] = 1
} }
data, err := goyaml.Marshal(m) data, err := yaml.Marshal(m)
c.Assert(err, IsNil) c.Assert(err, IsNil)
out := "\n" + string(data) out := "\n" + string(data)
last := 0 last := 0

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 import (
     "bytes"

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 import (
     "io"

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 import (
     "math"
@@ -27,7 +27,6 @@ func init() {
         t[int(c)] = 'M' // In map
     }
     t[int('.')] = '.' // Float (potentially in map)
-    t[int('<')] = '<' // Merge
 }

 var resolveMapList = []struct {
     v interface{}
@@ -45,6 +44,7 @@ func init() {
     {math.Inf(+1), "!!float", []string{".inf", ".Inf", ".INF"}},
     {math.Inf(+1), "!!float", []string{"+.inf", "+.Inf", "+.INF"}},
     {math.Inf(-1), "!!float", []string{"-.inf", "-.Inf", "-.INF"}},
+    {"<<", "!!merge", []string{"<<"}},
 }

     m := resolveMap
@@ -113,13 +113,8 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
     case 'D', 'S':
         // Int, float, or timestamp.
-        for i := 0; i != len(in); i++ {
-            if in[i] == '_' {
-                in = strings.Replace(in, "_", "", -1)
-                break
-            }
-        }
-        intv, err := strconv.ParseInt(in, 0, 64)
+        plain := strings.Replace(in, "_", "", -1)
+        intv, err := strconv.ParseInt(plain, 0, 64)
         if err == nil {
             if intv == int64(int(intv)) {
                 return "!!int", int(intv)
@@ -127,26 +122,23 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
                 return "!!int", intv
             }
         }
-        floatv, err := strconv.ParseFloat(in, 64)
+        floatv, err := strconv.ParseFloat(plain, 64)
         if err == nil {
             return "!!float", floatv
         }
-        if strings.HasPrefix(in, "0b") {
-            intv, err := strconv.ParseInt(in[2:], 2, 64)
+        if strings.HasPrefix(plain, "0b") {
+            intv, err := strconv.ParseInt(plain[2:], 2, 64)
             if err == nil {
                 return "!!int", int(intv)
             }
-        } else if strings.HasPrefix(in, "-0b") {
-            intv, err := strconv.ParseInt(in[3:], 2, 64)
+        } else if strings.HasPrefix(plain, "-0b") {
+            intv, err := strconv.ParseInt(plain[3:], 2, 64)
             if err == nil {
                 return "!!int", -int(intv)
             }
         }
         // XXX Handle timestamps here.

-    case '<':
-        // XXX Handle merge (<<) here.
-
     default:
         panic("resolveTable item not yet handled: " +
             string([]byte{c}) + " (with " + in + ")")
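
The resolver changes above strip underscores from plain scalars before integer and float parsing, keep the explicit handling for 0b and -0b binary prefixes, and resolve the literal << token to the !!merge tag via resolveMapList instead of a dedicated '<' character class (the Issue #24 test above shows why other tokens starting with '<' now marshal as plain strings). A small sketch of what that resolution implies when decoding, assuming the package is imported as gopkg.in/yaml.v1; the document and expected values follow the resolve() logic shown above rather than a separately tested reference.

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v1"
    )

    func main() {
        // Underscores are removed before strconv parsing, and 0b/-0b prefixes
        // are handled as binary integers, mirroring resolve() above.
        var v map[string]interface{}
        doc := []byte("a: 1_000\nb: 0b1010\nc: -0b10")
        if err := yaml.Unmarshal(doc, &v); err != nil {
            panic(err)
        }
        fmt.Println(v["a"], v["b"], v["c"]) // expected: 1000 10 -2
    }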

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 import (
     "bytes"

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 import (
     "reflect"

View File

@@ -1,7 +1,7 @@
-package goyaml_test
+package yaml_test

 import (
-    . "launchpad.net/gocheck"
+    . "gopkg.in/check.v1"
     "testing"
 )
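
This hunk only swaps the test harness import from launchpad.net/gocheck to gopkg.in/check.v1. For orientation, a gocheck-style bootstrap typically looks roughly like the sketch below; the S suite type matches the receivers used in encode_test.go above, but the Test function name is an assumption rather than the file's actual contents.

    package yaml_test

    import (
        "testing"

        . "gopkg.in/check.v1"
    )

    // Hypothetical wiring: register one suite and hand go test off to check.v1.
    type S struct{}

    var _ = Suite(&S{})

    func Test(t *testing.T) { TestingT(t) }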

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 // Set the writer error and return false.
 func yaml_emitter_set_writer_error(emitter *yaml_emitter_t, problem string) bool {

View File

@@ -1,5 +1,10 @@
-// Package goyaml implements YAML support for the Go language.
-package goyaml
+// Package yaml implements YAML support for the Go language.
+//
+// Source code and other details for the project are available at GitHub:
+//
+//   https://github.com/go-yaml/yaml
+//
+package yaml

 import (
     "errors"
@@ -28,32 +33,31 @@ func handleErr(err *error) {
     }
 }

-// Objects implementing the goyaml.Setter interface will receive the YAML
-// tag and value via the SetYAML method during unmarshaling, rather than
-// being implicitly assigned by the goyaml machinery. If setting the value
-// works, the method should return true. If it returns false, the given
-// value will be omitted from maps and slices.
+// The Setter interface may be implemented by types to do their own custom
+// unmarshalling of YAML values, rather than being implicitly assigned by
+// the yaml package machinery. If setting the value works, the method should
+// return true. If it returns false, the value is considered unsupported
+// and is omitted from maps and slices.
 type Setter interface {
     SetYAML(tag string, value interface{}) bool
 }

-// Objects implementing the goyaml.Getter interface will get the GetYAML()
-// method called when goyaml is requested to marshal the given value, and
-// the result of this method will be marshaled in place of the actual object.
+// The Getter interface is implemented by types to do their own custom
+// marshalling into a YAML tag and value.
 type Getter interface {
     GetYAML() (tag string, value interface{})
 }

 // Unmarshal decodes the first document found within the in byte slice
-// and assigns decoded values into the object pointed by out.
+// and assigns decoded values into the out value.
 //
-// Maps, pointers to structs and ints, etc, may all be used as out values.
-// If an internal pointer within a struct is not initialized, goyaml
-// will initialize it if necessary for unmarshalling the provided data,
-// but the struct provided as out must not be a nil pointer.
+// Maps and pointers (to a struct, string, int, etc) are accepted as out
+// values. If an internal pointer within a struct is not initialized,
+// the yaml package will initialize it if necessary for unmarshalling
+// the provided data. The out parameter must not be nil.
 //
 // The type of the decoded values and the type of out will be considered,
-// and Unmarshal() will do the best possible job to unmarshal values
+// and Unmarshal will do the best possible job to unmarshal values
 // appropriately. It is NOT considered an error, though, to skip values
 // because they are not available in the decoded YAML, or if they are not
 // compatible with the out value. To ensure something was properly
@@ -61,11 +65,11 @@ type Getter interface {
 // field (usually the zero value).
 //
 // Struct fields are only unmarshalled if they are exported (have an
-// upper case first letter), and will be unmarshalled using the field
-// name lowercased by default. When custom field names are desired, the
-// tag value may be used to tweak the name. Everything before the first
-// comma in the field tag will be used as the name. The values following
-// the comma are used to tweak the marshalling process (see Marshal).
+// upper case first letter), and are unmarshalled using the field name
+// lowercased as the default key. Custom keys may be defined via the
+// "yaml" name in the field tag: the content preceding the first comma
+// is used as the key, and the following comma-separated options are
+// used to tweak the marshalling process (see Marshal).
 // Conflicting names result in a runtime error.
 //
 // For example:
@@ -75,7 +79,7 @@ type Getter interface {
 //         B int
 //     }
 //     var T t
-//     goyaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
+//     yaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
 //
 // See the documentation of Marshal for the format of tags and a list of
 // supported tag options.
@@ -94,14 +98,16 @@ func Unmarshal(in []byte, out interface{}) (err error) {
 // Marshal serializes the value provided into a YAML document. The structure
 // of the generated document will reflect the structure of the value itself.
-// Maps, pointers to structs and ints, etc, may all be used as the in value.
+// Maps and pointers (to struct, string, int, etc) are accepted as the in value.
 //
-// In the case of struct values, only exported fields will be serialized.
-// The lowercased field name is used as the key for each exported field,
-// but this behavior may be changed using the respective field tag.
-// The tag may also contain flags to tweak the marshalling behavior for
-// the field. Conflicting names result in a runtime error. The tag format
-// accepted is:
+// Struct fields are only unmarshalled if they are exported (have an upper case
+// first letter), and are unmarshalled using the field name lowercased as the
+// default key. Custom keys may be defined via the "yaml" name in the field
+// tag: the content preceding the first comma is used as the key, and the
+// following comma-separated options are used to tweak the marshalling process.
+// Conflicting names result in a runtime error.
+//
+// The field tag format accepted is:
 //
 //     `(...) yaml:"[<key>][,<flag1>[,<flag2>]]" (...)`
 //
@@ -126,8 +132,8 @@ func Unmarshal(in []byte, out interface{}) (err error) {
 //         F int "a,omitempty"
 //         B int
 //     }
-//     goyaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
-//     goyaml.Marshal(&T{F: 1}} // Returns "a: 1\nb: 0\n"
+//     yaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
+//     yaml.Marshal(&T{F: 1}} // Returns "a: 1\nb: 0\n"
 //
 func Marshal(in interface{}) (out []byte, err error) {
     defer handleErr(&err)
@@ -142,7 +148,7 @@ func Marshal(in interface{}) (out []byte, err error) {
 // --------------------------------------------------------------------------
 // Maintain a mapping of keys to structure field indexes

-// The code in this section was copied from gobson.
+// The code in this section was copied from mgo/bson.

 // structInfo holds details for the serialization of fields of
 // a given struct.
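
The rewritten package documentation above describes the "yaml" field tag format and the Marshal/Unmarshal entry points. Below is a short round-trip sketch using a hypothetical Config type; the struct, its tags, and the expected output follow the doc comment above but are illustrative rather than taken from this repository.

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v1"
    )

    // Keys default to the lowercased field name; the "yaml" tag overrides the
    // key and adds flags such as omitempty, as described in the doc comment.
    type Config struct {
        Name  string `yaml:"name"`
        Count int    `yaml:"count,omitempty"`
    }

    func main() {
        out, err := yaml.Marshal(&Config{Name: "example"})
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // "name: example\n" (count omitted by omitempty)

        var c Config
        if err := yaml.Unmarshal([]byte("name: other\ncount: 3"), &c); err != nil {
            panic(err)
        }
        fmt.Println(c.Name, c.Count) // other 3
    }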

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 import (
     "io"

View File

@@ -1,4 +1,4 @@
-package goyaml
+package yaml

 const (
     // The size of the input raw buffer.

View File

@@ -1,14 +0,0 @@
[568].out
_*
*.cgo*.*
yaml-*/stamp-h1
yaml-*/Makefile
yaml-*/*/Makefile
yaml-*/libtool
yaml-*/config*
yaml-*/*/*.lo
yaml-*/*/*.la
yaml-*/*/.libs
yaml-*/*/.deps
yaml-*/tests/*

View File

@@ -1 +0,0 @@
propose -cr -for=lp:goyaml

View File

@@ -1,20 +0,0 @@
#!/bin/sh
set -e
BADFMT=`find * -name '*.go' | xargs gofmt -l`
if [ -n "$BADFMT" ]; then
BADFMT=`echo "$BADFMT" | sed "s/^/ /"`
echo -e "gofmt is sad:\n\n$BADFMT"
exit 1
fi
VERSION=`go version | awk '{print $3}'`
if [ $VERSION == 'devel' ]; then
go tool vet \
-methods \
-printf \
-rangeloops \
-printfuncs 'ErrorContextf:1,notFoundf:0,badReqErrorf:0,Commitf:0,Snapshotf:0,Debugf:0' \
.
fi

View File

@@ -1,39 +0,0 @@
include $(GOROOT)/src/Make.inc
YAML=yaml-0.1.3
LIBYAML=$(PWD)/$(YAML)/src/.libs/libyaml.a
TARG=launchpad.net/goyaml
GOFILES=\
goyaml.go\
resolve.go\
CGOFILES=\
decode.go\
encode.go\
CGO_OFILES+=\
helpers.o\
api.o\
scanner.o\
reader.o\
parser.o\
writer.o\
emitter.o\
GOFMT=gofmt
BADFMT:=$(shell $(GOFMT) -l $(GOFILES) $(CGOFILES) $(wildcard *_test.go))
all: package
gofmt: $(BADFMT)
@for F in $(BADFMT); do $(GOFMT) -w $$F && echo $$F; done
include $(GOROOT)/src/Make.pkg
ifneq ($(BADFMT),)
ifneq ($(MAKECMDGOALS),gofmt)
$(warning WARNING: make gofmt: $(BADFMT))
endif
endif