Compare commits

470 Commits

Author SHA1 Message Date
Alex Crawford
521ecfdab5 coreos-cloudinit: bump to 1.0.0 2014-11-26 14:19:13 -08:00
Alex Crawford
6d0fdf1a47 Merge pull request #268 from crawford/dropins
drop-in: add support for drop-ins
2014-11-26 14:14:49 -08:00
Alex Crawford
ffc54b028c drop-in: add support for drop-ins
This allows a list of drop-ins for a unit to be declared inline within a
cloud-config. For example:

  #cloud-config
  coreos:
    units:
      - name: docker.service
        drop-ins:
          - name: 50-insecure-registry.conf
            content: |
              [Service]
              Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
2014-11-26 14:09:35 -08:00
Alex Crawford
420f7cf202 system: clean up TestPlaceUnit() 2014-11-26 10:32:43 -08:00
Alex Crawford
624df676d0 config/unit: move Type() and Group() out of config 2014-11-26 10:32:43 -08:00
Alex Crawford
75ed8dacf9 initialize: clean up TestProcessUnits() 2014-11-26 10:32:43 -08:00
Alex Crawford
dcaabe4d4a system: clean up UnitManager interface 2014-11-26 10:32:43 -08:00
Alex Crawford
92c57423ba Merge pull request #269 from crawford/valid
validate: promote invalid values to an error
2014-11-26 10:32:27 -08:00
Alex Crawford
7447e133c9 validate: promote invalid values to an error 2014-11-26 10:29:09 -08:00
Eugene Yakubovich
4e466c12da Merge pull request #267 from thommay/flannel_unit
the flannel service is called flanneld
2014-11-25 12:30:58 -08:00
Thom May
333468dba3 the flannel service is called flanneld 2014-11-25 14:00:53 +00:00
Alex Crawford
55c3a793ad coreos-cloudinit: bump to 0.11.4+git 2014-11-21 20:11:54 -08:00
Alex Crawford
eca51031c8 coreos-cloudinit: bump to 0.11.4 2014-11-21 20:11:37 -08:00
Alex Crawford
19522bcb82 Merge pull request #266 from crawford/config
config: update configs to match etcd, fleet, and flannel
2014-11-21 20:10:34 -08:00
Alex Crawford
62248ea33d config/fleet: fix configs
Added EtcdKeyPrefix and fixed the types of EngineReconcileInterval and EtcdRequestTimeout.
2014-11-21 16:57:00 -08:00
Alex Crawford
d2a19cc86d config/flannel: correct - vs _ 2014-11-21 16:57:00 -08:00
Alex Crawford
08131ffab1 config/etcd: fix configs
This new table is pulled from the etcd codebase rather than the docs...

Added:
 GraphiteHost
 PeerElectionTimeout
 PeerHeartbeatInterval
 PeerKeyFile
 RetryInterval
 SnapshotCount
 StrTrace
 VeryVeryVerbose

Fixed types:
 ClusterActiveSize
 ClusterRemoveDelay
 ClusterSyncInterval
 HTTPReadTimeout
 HTTPWriteTimeout
 MaxResultBuffer
 MaxRetryAttempts
 Snapshot
 Verbose
 VeryVerbose

Renamed:
 Cors

Removed:
 MaxClusterSize
 CPUProfileFile
2014-11-21 16:57:00 -08:00
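
For context, the names above are the Go struct fields; in a cloud-config they appear
under coreos.etcd in lower-case, underscore-separated form and are rendered into an
etcd drop-in. A minimal sketch (discovery token and addresses are placeholders, not
taken from this change):

  #cloud-config
  coreos:
    etcd:
      name: node001
      discovery: https://discovery.etcd.io/<token>
      addr: $private_ipv4:4001
      peer_addr: $private_ipv4:7001
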
Alex Crawford
4a0019c669 config: add support for float64 2014-11-21 16:13:49 -08:00
Alex Crawford
3275ead1ec coreos-cloudinit: bump to 0.11.3+git 2014-11-21 12:25:26 -08:00
Alex Crawford
32b6a55724 coreos-cloudinit: bump to 0.11.3 2014-11-21 12:25:04 -08:00
Alex Crawford
6c43644369 Merge pull request #265 from crawford/update
config/update: add "off" as a valid strategy
2014-11-21 12:22:45 -08:00
Alex Crawford
e6593d49e6 config/update: add "off" as a valid strategy
It was assumed that the user would specify the reboot strategy as an
unquoted value. In the case that they turn off updates, `off` is
interpreted as a boolean and the normalization pass converts that to
`false`. In the event that the user uses `"off"`, it's interpreted as a
string and not modified.
2014-11-21 10:41:03 -08:00
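
A minimal cloud-config sketch of the quoting behavior described above; with the value
quoted, YAML keeps it as the string "off" instead of normalizing it to the boolean
false (the hyphenated key is shown, but after the key-normalization change the
underscore spelling is equivalent):

  #cloud-config
  coreos:
    update:
      reboot-strategy: "off"
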
Alex Crawford
ab752b239f coreos-cloudinit: bump to 0.11.2+git 2014-11-20 11:29:25 -08:00
Alex Crawford
0742e4d357 coreos-cloudinit: bump to 0.11.2 2014-11-20 11:29:12 -08:00
Alex Crawford
78f586ec9e Merge pull request #262 from crawford/permissions
config: fix parsing of file permissions
2014-11-20 11:28:11 -08:00
Alex Crawford
6f91b76d79 docs: correct type of permissions 2014-11-20 11:14:44 -08:00
Alex Crawford
5c80ccacc4 config: fix parsing of file permissions
The file permissions can be specified (unfortunately) as a string or an
octal integer. During the normalization step, every field is
unmarshalled into an interface{}. String types are kept intact but
integers are converted to decimal integers. If the raw config
represented the permissions as an octal, it would be converted to
decimal _before_ it was saved to RawFilePermissions. Permissions() would
then try to convert it again, assuming it was an octal. The new behavior
doesn't assume the radix of the number, allowing decimal and octal
input.
2014-11-20 11:14:44 -08:00
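
A small write_files sketch of the two accepted spellings discussed above: a quoted
value stays a string, while an unquoted octal literal is parsed by YAML as an integer
and, after this fix, is no longer misinterpreted during normalization (paths and
contents are illustrative):

  #cloud-config
  write_files:
    - path: /etc/example-string.conf
      permissions: "0644"
      content: |
        key=value
    - path: /etc/example-octal.conf
      permissions: 0644
      content: |
        key=value
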
Alex Crawford
97758b343b coreos-cloudinit: bump to 0.11.1+git 2014-11-18 12:14:34 -08:00
Alex Crawford
fb6f52b360 coreos-cloudinit: bump to 0.11.1 2014-11-18 12:14:29 -08:00
Alex Crawford
786cd2a539 Merge pull request #259 from crawford/hyphen
config/validate: disable - vs _ message for now
2014-11-18 12:12:26 -08:00
Alex Crawford
45793f1254 config/validate: disable - vs _ message for now 2014-11-18 12:11:50 -08:00
Alex Crawford
b621756d92 Merge pull request #258 from crawford/header
config/validate: fix line number for header check
2014-11-18 12:11:35 -08:00
Alex Crawford
a5b5c700a6 config/validate: fix line number for header check 2014-11-18 12:02:23 -08:00
Alex Crawford
d7602f3c08 Merge pull request #244 from eyakubovich/master
flannel: added flannel support and helper to make dropins
2014-11-14 10:46:19 -08:00
Eugene Yakubovich
a20addd05e flannel: added flannel support and helper to make dropins
fleet, flannel, and etcd all generate dropins from config.
To reduce code duplication, factor out a helper to do that.
2014-11-14 10:45:23 -08:00
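
Like the etcd and fleet sections, the new coreos.flannel section is rendered into a
drop-in for the flanneld unit. A sketch of what such a section might look like; the
exact key names supported at this point are an assumption based on flanneld's option
names:

  #cloud-config
  coreos:
    flannel:
      interface: $public_ipv4
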
Alex Crawford
d9d89a6fa0 coreos-cloudinit: bump to 0.11.0+git 2014-11-14 10:42:00 -08:00
Alex Crawford
3c26376326 coreos-cloudinit: bump to 0.11.0 2014-11-14 10:41:47 -08:00
Alex Crawford
d3294bcb86 Merge pull request #254 from crawford/validator
config: add new validator
2014-11-12 17:40:16 -08:00
Alex Crawford
dda314b518 flags: add validate flag
This will allow the user to run a standalone validation.
2014-11-12 16:48:57 -08:00
Alex Crawford
055a3c339a config/validate: add new config validator
This validator is still experimental and is going to need new rules in the
future. This lays out the general framework.
2014-11-12 16:48:57 -08:00
Alex Crawford
51f37100a1 config: remove config validator 2014-11-07 10:18:08 -08:00
Alex Crawford
88e8265cd6 config: separate AssertValid and AssertStructValid
Added an error structure to make it possible to get the specifics of the failure.
2014-11-07 10:14:34 -08:00
Alex Crawford
6e2db882e6 script: move Script into config package 2014-11-07 10:13:52 -08:00
Alex Crawford
3e2823df1b Merge pull request #256 from crawford/hyphen
config: deprecate - in favor of _ for key names
2014-11-03 14:54:23 -08:00
Alex Crawford
46cb51cf91 Merge pull request #257 from crawford/networkd
networkd: remove double-restart workaround
2014-11-03 14:25:38 -08:00
Alex Crawford
1a6cee5305 networkd: remove double-restart workaround
The kernel fixed the underlying issue in 763e0ec and e721f87.
2014-11-03 14:11:15 -08:00
Alex Crawford
d02aa18839 config: deprecate - in favor of _ for key names
In all of the YAML tags, - has been replaced with _. normalizeConfig() and
normalizeKeys() have also been added to perform the normalization of the input
cloud-config.

As part of the normalization process, falsey values are converted to "false".
The "off" update strategy is no exception and as a result the "off" update
strategy has been changed to "false".
2014-11-03 12:09:52 -08:00
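
Concretely, normalizeKeys() rewrites the old hyphenated spellings on input, so a
config written either way ends up equivalent (the value shown is illustrative):

  #cloud-config
  coreos:
    update:
      reboot-strategy: best-effort

is treated the same as:

  #cloud-config
  coreos:
    update:
      reboot_strategy: best-effort
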
Alex Crawford
e9bda98b54 Merge pull request #252 from crawford/vet
go vet
2014-10-23 12:03:01 -07:00
Alex Crawford
badc874b74 travis: install go vet 2014-10-23 11:47:24 -07:00
Alex Crawford
c9e8c887b8 test: run go vet 2014-10-23 11:46:40 -07:00
Alex Crawford
8be307de49 *: fix warnings from go vet 2014-10-23 11:46:08 -07:00
Alex Crawford
562c474275 system: embed config within EtcHosts and Update 2014-10-23 11:44:15 -07:00
Jonathan Boulle
5c5834863b Merge pull request #250 from jonboulle/master
*: switch to Godeps
2014-10-20 12:09:04 -07:00
Jonathan Boulle
44f0a949c5 *: switch to Godeps 2014-10-20 12:04:03 -07:00
Jonathan Boulle
106c4e7a2c Merge pull request #249 from jonboulle/license_header
*: add license header to all source files
2014-10-17 15:42:20 -07:00
Jonathan Boulle
6c1ba590aa *: add license header to all source files 2014-10-17 15:36:22 -07:00
Alex Crawford
45da664c59 Merge pull request #246 from crawford/master
Add support for Azure
2014-10-12 21:37:34 -07:00
Alex Crawford
2a71551ef2 azure: add support for azure (via azure agent) 2014-10-11 09:19:47 -07:00
Alex Crawford
84e1cb3242 datasource/waagent: add support for WAAgent metadata 2014-10-11 09:19:47 -07:00
Jonathan Boulle
5214ead926 Merge pull request #245 from jonboulle/units
init: simplify CloudConfigUnit interface
2014-10-06 15:26:36 -07:00
Jonathan Boulle
e2c24c4cef init: simplify CloudConfigUnit interface 2014-10-06 15:14:29 -07:00
Alex Crawford
75e288c553 coreos-cloudinit: bump to 0.10.4+git 2014-09-24 19:25:55 -07:00
Alex Crawford
0785840fe3 coreos-cloudinit: bump to 0.10.4 2014-09-24 19:25:34 -07:00
Alex Crawford
c10bfc2f56 Merge pull request #240 from epankala/euca4_compat_fix
AWS: Eucalyptus 4.x compatibility fix
2014-09-24 10:55:39 -07:00
Janne Paenkaelae
2f954dcdc2 AWS: Eucalyptus 4.x compatibility fix
For Eucalyptus 4.0.1, requesting metadata seems to work differently than with EC2.

In Euca:
> curl http://169.254.169.254/2009-04-04
<?xml version="1.0"?><Response><Errors><Error><Code>404 Not Found</Code><Message>unknown</Message></Error></Errors><RequestID>unknown</RequestID></Response>core@localhost ~ $

> curl http://169.254.169.254/2009-04-04/
dynamic
meta-data
user-data

In AWS EC2
> curl http://169.254.169.254/2009-04-04
"" (zero bytes)

> curl http://169.254.169.254/2009-04-04/
dynamic
meta-data
user-data

Because the isAvailable() function in metadata.go tests only for the error code,
it fails in Euca.
2014-09-24 20:33:29 +03:00
Alex Crawford
cdfc94f4e9 Merge pull request #234 from crawford/validate
config: explicitly specify fields and separate config and application
2014-09-24 07:42:09 -07:00
Alex Crawford
18e2f98414 cloudconfig: refactor config
- Move CloudConfig into config package
- Add YAML tags to CloudConfig
2014-09-23 17:59:32 -07:00
Alex Crawford
4b472795c4 user: move User into config package
- Add YAML tags for the fields
2014-09-23 17:59:19 -07:00
Alex Crawford
85b8d804c8 file: refactor config
- Separate the config from Permissions()
- Add YAML tags for the fields
2014-09-23 17:59:16 -07:00
Alex Crawford
1fbbaaec19 unit: refactor config
- Separate the config from Destination()
- Add YAML tags for the fields
2014-09-23 17:58:32 -07:00
Alex Crawford
667dbd8fb7 update: refactor config
- Explicitly specify all of the valid options for Update
- Separate the config from File() and Units()
- Add YAML tags for the fields
2014-09-23 17:57:43 -07:00
Alex Crawford
6730cb7227 oem: refactor the config
- Separate the config from File()
- Add YAML tags for the fields
2014-09-23 16:08:23 -07:00
Alex Crawford
9454522033 fleet: refactor config
- Explicitly specify all of the valid options for fleet
- Separate the config from Units()
- Add YAML tags for the fields
2014-09-23 16:07:53 -07:00
Alex Crawford
c255739a93 etcd: refactor config
- Explicitly specify all of the valid options for etcd
- Remove the default name generation (ETCD_NAME is set by its unit file now)
- Separate the etcd config from Units()
- Remove support for DISCOVERY_URL
- Add YAML tags for the fields
2014-09-23 16:07:13 -07:00
Alex Crawford
2051cd3e1c Merge pull request #238 from crawford/docs
docs: fix documentation of coreos.units.command
2014-09-23 11:33:44 -07:00
Alex Crawford
b52cb3fea3 docs: fix documentation of coreos.units.command 2014-09-23 11:32:15 -07:00
Alex Crawford
da5f85b3fb coreos-cloudinit: bump to 0.10.3+git 2014-09-17 12:19:27 -07:00
Alex Crawford
9999178538 coreos-cloudinit: bump to 0.10.3 2014-09-17 12:19:13 -07:00
Alex Crawford
8f766e4666 Merge pull request #235 from crawford/routes
network: add support for CIDR addresses in Debian routes
2014-09-17 12:18:16 -07:00
Alex Crawford
2d28d16c92 network: add support for CIDR addresses in Debian routes
OnMetal is changing their template from:
`route add -net 1.2.3.0 netmask 255.255.255.0 gw 10.1.2.1 || true`
to:
`route add -net 1.2.3.0/24 gw 10.1.2.1 || true`
2014-09-16 17:36:34 -07:00
Alex Crawford
e9cd09dd7b coreos-cloudinit: bump to 0.10.2+git 2014-09-14 08:19:57 -07:00
Alex Crawford
8370b30aa2 coreos-cloudinit: bump to 0.10.2 2014-09-14 08:19:33 -07:00
Alex Crawford
3e015cc3a1 Merge pull request #233 from crawford/configdrive
configdrive: don't fail if no network config was provided
2014-09-14 08:18:14 -07:00
Alex Crawford
a0fe6d0884 configdrive: return an empty network config when filename is empty
Additionally, don't bother checking for a network config if it isn't going to
be processed.
2014-09-13 21:51:51 -07:00
Alex Crawford
585ce5fcd9 Revert "metadata: don't fail if no network config was provided"
This reverts commit c1f373e648.
2014-09-13 21:01:42 -07:00
Alex Crawford
72445796ca coreos-cloudinit: bump to 0.10.1+git 2014-09-12 16:48:15 -07:00
Alex Crawford
7342d91a85 coreos-cloudinit: bump to 0.10.1 2014-09-12 16:47:58 -07:00
Alex Crawford
db1bc51c98 Merge pull request #231 from crawford/netconf
metadata: don't fail if no network config was provided
2014-09-12 16:35:24 -07:00
Alex Crawford
c1f373e648 metadata: don't fail if no network config was provided 2014-09-12 16:29:27 -07:00
Alex Crawford
db49a16002 coreos-cloudinit: bump to 0.10.0+git 2014-09-11 17:37:05 -07:00
Alex Crawford
a4a6c281d9 coreos-cloudinit: bump to 0.10.0 2014-09-11 17:36:38 -07:00
Alex Crawford
17f8733121 Merge pull request #228 from crawford/sub
env: add support for escaping environment substitutions
2014-09-11 15:34:03 -07:00
Alex Crawford
7dec922618 env: add support for escaping environment substitutions 2014-09-11 15:30:33 -07:00
Alex Crawford
54d3ae27af Merge pull request #226 from crawford/oem
flags: add oem flag
2014-09-11 13:25:21 -07:00
Alex Crawford
ee2416af64 flags: move the flags into their own namespace 2014-09-11 12:00:17 -07:00
Alex Crawford
cda037f9a5 flags: add oem flag
The oem flag will allow each of the OEMs to specify one flag only, acting as a
shortcut to their specific configuration. This will allow us to update which
options each OEM uses when running cloudinit.
2014-09-11 12:00:17 -07:00
Alex Crawford
549806cf64 Merge pull request #227 from crawford/ipv6
metadata: add support for IPv6 variable substitution
2014-09-11 10:45:33 -07:00
Alex Crawford
56815a6756 metadata: add support for IPv6 variable substitution 2014-09-11 10:43:02 -07:00
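
A sketch of how the substitution is used, assuming the new variables follow the
$public_ipv6 / $private_ipv6 naming of the existing IPv4 variables (example.service
is a hypothetical unit used only for illustration):

  #cloud-config
  coreos:
    units:
      - name: example.service
        command: start
        content: |
          [Service]
          ExecStart=/usr/bin/echo "public address: $public_ipv6"
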
Alex Crawford
24a6f7c49c Merge pull request #225 from crawford/exit
userdata: change handling of bad userdata
2014-09-10 19:16:12 -07:00
Alex Crawford
98484be434 userdata: change handling of bad userdata
Don't fail after encountering bad userdata. Continue processing the metadata
and then exit. This will allow people with bad userdata to actually log in and
see the error.
2014-09-10 17:50:23 -07:00
Jonathan Boulle
9024659296 Merge pull request #217 from ecnahc515/patch-1
Fix broken link to fleet config
2014-09-09 15:19:18 -07:00
Chance Zibolski
fc6940f7ba Documentation: More specific link to fleet config.
Add an anchor tag to the URL to take the reader directly to the config section.
2014-09-09 15:15:55 -07:00
Brian Waldon
f2fd95699b Merge pull request #224 from bcwaldon/typo
docs: fix a typo
2014-09-09 12:36:42 -07:00
bdevloed
65db96cc7c docs: fix a typo 2014-09-09 12:31:54 -07:00
Alex Crawford
c17b93b5c0 Merge pull request #223 from crawford/yaml
third_party: sync third_party/gopkg.in/yaml.v1
2014-09-08 19:28:59 -07:00
Alex Crawford
d352f8ce6a Merge pull request #222 from crawford/contribute
docs: Update maintainers and contribution guide
2014-09-08 15:54:30 -07:00
Alex Crawford
78aa2c56ec yaml: replace goyaml with yaml 2014-09-08 13:25:27 -07:00
Alex Crawford
c5b3788282 third_party: sync third_party/gopkg.in/yaml.v1
Update launchpad.net/goyaml to gopkg.in/yaml.v1
2014-09-08 13:23:50 -07:00
Alex Crawford
5e98970bb5 docs: Update maintainers and contribution guide 2014-09-08 12:55:17 -07:00
Alex Crawford
cbdd446c55 Merge pull request #220 from crawford/docs
docs: Update list of platforms supporting variable substitutions
2014-09-05 11:51:45 -07:00
Alex Crawford
316cadcf44 docs: Update list of platforms supporting variable substitutions 2014-09-04 12:57:19 -07:00
Alex Crawford
5a939be21b coreos-cloudinit: bump to 0.9.6+git 2014-09-02 17:49:09 -07:00
Alex Crawford
8d76c64386 coreos-cloudinit: bump to 0.9.6 2014-09-02 17:48:45 -07:00
Alex Crawford
1b854eb51e Merge pull request #218 from crawford/units
units: Ensure that the units are executed in order
2014-09-02 17:40:37 -07:00
Alex Crawford
9fcf338bf3 units: Ensure that the units are executed in order 2014-09-02 17:15:32 -07:00
Alex Crawford
fda72bdb5c coreos-cloudinit: bump to 0.9.5+git 2014-09-02 10:10:59 -07:00
Alex Crawford
685a38c6c8 coreos-cloudinit: bump to 0.9.5 2014-09-02 10:10:41 -07:00
Alex Crawford
9d15f2cfaf Merge pull request #213 from crawford/digitalocean
digitalocean: Add support for DigitalOcean
2014-09-01 16:55:12 -07:00
Alex Crawford
2134fce791 digitalocean: Add tests for network unit generation 2014-09-01 16:53:15 -07:00
Alex Crawford
3abd6b2225 digitalocean: Add DigitalOcean metadata service
Move debian-related processing into its own file.
2014-09-01 16:53:15 -07:00
Alex Crawford
2a8e6c9566 network: Fall back to MAC address if there is no name 2014-09-01 09:29:45 -07:00
Alex Crawford
abe43537da metadata: Merge the network config 2014-09-01 09:29:45 -07:00
Jonathan Boulle
3a550af651 Merge pull request #216 from robszumski/patch-2
docs: fix broken link to fleet docs
2014-08-29 11:22:13 -07:00
Rob Szumski
61c3a0eb2d docs: fix broken link to fleet docs 2014-08-29 11:17:05 -07:00
Brian Waldon
480176bc11 Merge pull request #214 from bcwaldon/clarify-write-files
doc: clarify docs around write_files
2014-08-28 20:24:11 -07:00
Brian Waldon
01b18eb551 squash: fix spacing 2014-08-28 13:48:58 -07:00
Brian Waldon
970ef435b6 doc: clarify docs around write_files 2014-08-28 13:33:59 -07:00
Alex Crawford
e8d0021140 Merge pull request #212 from crawford/metadata
refactor: Refactor metadata and datasources to be more testable
2014-08-26 18:45:10 -07:00
Alex Crawford
e9ec78ac6f test: Refactor interface tests a little 2014-08-26 13:18:46 -07:00
Alex Crawford
4a2e417781 network: Add support for multiple addresses and HW addresses
Logical Interfaces can be assigned a hardware address allowing them
to match on MAC address. The static config method also needs to
support specifying multiple addresses.
2014-08-26 13:07:38 -07:00
Alex Crawford
604ef7ecb4 datasource: Add FetchNetworkConfig
FetchNetworkConfig is currently only used by ConfigDrive to read the
network config file from the disk.
2014-08-26 13:04:43 -07:00
Alex Crawford
c39dd5cc67 networkd: Fix bug causing bonding to always be loaded 2014-08-26 13:04:21 -07:00
Alex Crawford
a923161f4a metadata: Refactor common parts out of ec2 2014-08-26 12:02:56 -07:00
Alex Crawford
e59e2f6cd5 Merge pull request #210 from crawford/test
test: Add gofmt to test
2014-08-25 17:04:04 -07:00
Alex Crawford
e90fe3eba8 test: Add gofmt to test 2014-08-25 12:48:52 -07:00
Alex Crawford
fb0187b197 gofmt: sort 2014-08-25 12:35:40 -07:00
Michael Marineau
6babe74716 Merge pull request #209 from marineam/go13
travis: enable testing under go 1.3
2014-08-25 12:26:23 -07:00
Michael Marineau
b1e88284ca travis: enable testing under go 1.3 2014-08-25 12:21:07 -07:00
Alex Crawford
18a65f7dac Merge pull request #208 from crawford/go
test: Fix tests for Go 1.3
2014-08-25 12:19:52 -07:00
Alex Crawford
0c212c72c9 test: Fix tests for Go 1.3 2014-08-25 12:01:27 -07:00
Alex Crawford
6a800d8cc0 coreos-cloudinit: bump to 0.9.4+git 2014-08-24 18:41:20 -07:00
Alex Crawford
5e112147bb coreos-cloudinit: bump to 0.9.4 2014-08-24 18:40:53 -07:00
Alex Crawford
7e78b1563f Merge pull request #206 from crawford/tests
test: Enable tests for CloudSigma datasource
2014-08-24 18:36:38 -07:00
Alex Crawford
ecbe81f103 test: Enable tests for CloudSigma datasource 2014-08-24 17:08:49 -07:00
Alex Crawford
45c20c1dd3 Merge pull request #196 from Vladimiroff/cloudsigma
cloudsigma: Add support for CloudSigma datasource
2014-08-15 15:21:33 -07:00
Alex Crawford
8ce925a060 coreos-cloudinit: bump to 0.9.3+git 2014-08-15 10:47:28 -07:00
Alex Crawford
eadb6ef42c coreos-cloudinit: bump to 0.9.3 2014-08-15 10:46:46 -07:00
Alex Crawford
7518f0ec93 Merge pull request #204 from crawford/configdrive
configdrive: Remove broken support for ec2 metadata
2014-08-15 10:43:26 -07:00
Alex Crawford
f0b9eaf2fe configdrive: Remove broken support for ec2 metadata
As it turns out, certain metadata is only present in the ec2 flavor
of metadata (e.g. public_ipv4) and other data is only present in
the openstack flavor (e.g. network_config). For now, just read the
openstack metadata.
2014-08-15 10:35:21 -07:00
Kiril Vladimirov
7320a2cbf2 feat(datasource/metadata): Add datasource for CloudSigma 2014-08-15 12:08:55 +03:00
Kiril Vladimirov
57950b3ed9 add(goserial): import github.com/tarm/goserial 2014-08-15 12:08:34 +03:00
Kiril Vladimirov
85c6a2a16a add(cepgo): import github.com/cloudsigma/cepgo 2014-08-15 12:07:58 +03:00
Jonathan Boulle
24b44e86a6 coreos-cloudinit: bump to 0.9.2+git 2014-08-12 11:38:51 -07:00
Jonathan Boulle
2f52ad4ef8 coreos-cloudinit: bump to 0.9.2 2014-08-12 11:38:12 -07:00
Jonathan Boulle
735d6c6161 Merge pull request #202 from jonboulle/env
environment: write new keys in consistent order
2014-08-11 22:40:42 -07:00
Alex Crawford
1cf275bad6 Merge pull request #201 from crawford/configdrive
configdrive: fix root path
2014-08-11 20:11:17 -07:00
Jonathan Boulle
f1c97cb4d5 environment: write new keys in consistent order 2014-08-11 18:24:58 -07:00
Alex Crawford
d143904aa9 configdrive: fix root path 2014-08-11 17:57:10 -07:00
Jonathan Boulle
c428ce2cc5 Merge pull request #200 from jonboulle/fu
initialize: use correct heuristic to check if etcdenvironment is set
2014-08-11 17:44:44 -07:00
Jonathan Boulle
dfb5b4fc3a initialize: use correct heuristic to check if etcdenvironment is set
In some circumstances (e.g. nova-agent-watcher) cloudconfig files will
be created where the EtcdEnvironment is an empty map, and hence != nil.
If this is the case we should not do anything at all (because the user
hasn't explicitly asked us to configure etcd). This change standardises
behaviour with the check that we already do for FleetEnvironment.
2014-08-11 16:01:08 -07:00
Alex Crawford
97d5538533 Merge pull request #197 from crawford/ec2
datasource: Fix ec2 URLs
2014-08-06 22:45:03 -07:00
Alex Crawford
6b8f82b5d3 datasource: Fix ec2 URLs
_ vs -
2014-08-06 21:31:43 -07:00
Alex Crawford
facde6609f Merge pull request #194 from crawford/metadata
datasource: Refactoring datasources
2014-08-06 15:55:13 -07:00
Alex Crawford
d68ae84b37 metadata: Refactor metadata service into ec2 metadata
Added more testing.
2014-08-05 17:19:43 -07:00
Alex Crawford
54aa39543b timeouts: Use After() instead of Tick() 2014-08-04 15:10:14 -07:00
Alex Crawford
8566a2c118 datasource: Move datasources into their own packages. 2014-08-04 15:10:07 -07:00
Alex Crawford
49ac083af5 coreos-cloudinit: bump to 0.9.1+git 2014-08-04 14:14:24 -07:00
Alex Crawford
5d65ca230a coreos-cloudinit: bump to 0.9.1 2014-08-04 14:13:51 -07:00
Alex Crawford
38b3e1213a Merge pull request #188 from crawford/configdrive
configdrive: Use the EC2 metadata over OpenStack
2014-08-04 11:12:06 -07:00
Alex Crawford
4eedca26e9 configdrive: Use the EC2 metadata over OpenStack
Standardize on specific EC2 and OpenStack versions and add tests.
2014-08-04 10:18:29 -07:00
Brian Waldon
f2b342c8be doc: escape user.home example 2014-08-01 13:20:44 -07:00
Michael Marineau
c19d8f6b61 Merge pull request #193 from benjic/cloudconfig_variables
docs(quick-start): Clarified use of fields in cloud config
2014-07-24 11:02:03 -07:00
Benjamin Campbell
7913f74351 docs(quick-start) Enumerated supported platforms
Following a suggestion, added a list of platforms that *do* support cloud-config variables. In addition, minor markup formatting is added.
2014-07-24 11:54:31 -06:00
Benjamin Campbell
5593408be8 docs(quick-start): Clarified use of fields in cloud config
Updated the language to illustrate that fields in a cloud-config are not
supported in all environments. This is expressed explicitly in the PXE and
install-to-disk pages. The quick start lacked this information and was
inconsistent.
2014-07-24 11:27:35 -06:00
Alex Crawford
7fc67c2acf Merge pull request #191 from crawford/panic
config: Verify that type assertions are valid
2014-07-22 11:51:39 -07:00
Alex Crawford
b093094292 config: Verify that type assertions are valid 2014-07-22 11:39:20 -07:00
Michael Marineau
9a80fd714a Merge pull request #181 from robszumski/docs-startup
fix(docs): clarity around boot behavior and unit usage
2014-07-21 22:12:19 -07:00
Rob Szumski
fef5473881 fix(docs): clarity around boot behavior and unit usage 2014-07-21 21:41:00 -07:00
Alex Crawford
bf5a2b208f coreos-cloudinit: bump to 0.9.0+git 2014-07-21 19:17:14 -07:00
Alex Crawford
364507fb75 coreos-cloudinit: bump to 0.9.0 2014-07-21 19:16:11 -07:00
Alex Crawford
08d4842502 Merge pull request #190 from crawford/logs
Logs
2014-07-21 12:22:41 -07:00
Alex Crawford
21e32e44f8 system: Add more logging for networkd 2014-07-21 11:25:22 -07:00
Alex Crawford
7a06dee16f system: Cleanup redundant code 2014-07-21 11:24:42 -07:00
Alex Crawford
ff9cf5743d Merge pull request #187 from crawford/order
networkd: Reverse lexicographic order of generated unit files
2014-07-18 13:23:58 -07:00
Alex Crawford
1b10a3a187 networkd: Reverse lexicographic order of generated unit files 2014-07-17 20:47:37 -07:00
Michael Marineau
10838e001d Merge pull request #186 from robszumski/add-highlighting
feat(docs): add syntax highlighting
2014-07-15 15:26:33 -07:00
Rob Szumski
96370ac5b9 feat(docs): add syntax highlighting 2014-07-14 16:16:14 -07:00
Michael Marineau
0b82cd074d Merge pull request #180 from marineam/systemd_testing
chore(*): split out unit processing from config.Apply
2014-07-11 20:09:08 -07:00
Alex Crawford
a974e85103 Merge pull request #174 from crawford/teeth
networkd: Fix issues with bonding and VLANs
2014-07-11 15:48:02 -07:00
Michael Marineau
f0450662b0 Merge pull request #183 from marineam/fix
tests: fix error messages, use Fatalf
2014-07-11 15:40:54 -07:00
Michael Marineau
03e29d1291 tests: fix error messages, use Fatalf 2014-07-11 15:38:04 -07:00
Michael Marineau
98ae5d88aa coreos-cloudinit: bump to 0.8.9+git 2014-07-11 14:40:57 -07:00
Michael Marineau
bf5d3539c9 coreos-cloudinit: bump to 0.8.9 2014-07-11 14:40:34 -07:00
Michael Marineau
5e4cbcd909 Merge pull request #182 from marineam/fix
env_file: fix broken test cases
2014-07-11 14:38:56 -07:00
Michael Marineau
a270c4c737 env_file: fix broken test cases
TestWriteEnvFileDos2Unix had a copy/paste bug: it shouldn't have
asserted that mtime doesn't change because the file is actually being
modified in this test. This didn't come up earlier because the actual
comparison wasn't using Time.Equal as it should have.

Instead, switch to comparing inode numbers, which is the actual thing I
wanted to test for in the first place; accessing them is just much more
awkward. Now all tests where it is relevant check the inode in addition
to the contents.
2014-07-11 13:35:10 -07:00
Michael Marineau
f356a8a690 coreos-cloudinit: bump to 0.8.8+git 2014-07-11 11:13:01 -07:00
Michael Marineau
b1a897d75c coreos-cloudinit: bump to 0.8.8 2014-07-11 11:12:15 -07:00
Jonathan Boulle
be51f4eba0 chore(*): split out unit processing from config.Apply 2014-07-11 10:44:19 -07:00
Michael Marineau
a55e2cd49b Merge pull request #178 from marineam/env
Write /etc/environment
2014-07-11 10:39:33 -07:00
Michael Marineau
983501e43b environment: add support for updating /etc/environment with IP values
To maintain the behavior of the coreos-setup-environment that has
started to move into cloudinit we need to write out /etc/environment
with the public and private addresses, if known. The file is updated so
that other contents are not replaced. This behavior is disabled entirely
if /etc/environment was written by a write_files entry.
2014-07-11 10:34:44 -07:00
Alex Crawford
e3037f18a6 networkd: Restart networkd twice to work around race
https://bugs.freedesktop.org/show_bug.cgi?id=76077
2014-07-10 23:40:42 -07:00
Alex Crawford
fe388a3ab6 networkd: Create config directory before writing config 2014-07-10 23:40:42 -07:00
Alex Crawford
c820f2b1cf bonding: Add support for probing the bonding module with parameters
Until support for bonding params is added to networkd, this will be
necessary in order to use bonding parameters (e.g. miimon, mode).
This also makes it such that the 8021q module will only be loaded if
the network config makes use of VLANs.
2014-07-10 23:40:42 -07:00
Michael Marineau
81824be3bf system: new file writer for updating env-style files
This can be used to safely update config files cloudinit does not have
exclusive control over. For example update.conf or /etc/environment.
2014-07-10 15:53:32 -07:00
Michael Marineau
98c26440be Merge pull request #176 from jayofdoom/master
Document need for #cloud-config in cloud-config.yml
2014-07-09 16:41:00 -07:00
Jay Faulkner
3b5fcc393b Document need for #cloud-config in cloud-config.yml
- cloud-config.yml does not work if it's missing the #cloud-config
  directive at the top. This is undocumented, except in the examples.
2014-07-09 16:36:11 -07:00
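
In other words, the very first line of the file must be the #cloud-config comment or
the file is not recognized as a cloud-config. A minimal valid example (the hostname
value is illustrative):

  #cloud-config
  hostname: coreos1
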
Alex Crawford
9528077340 coreos-cloudinit: bump to 0.8.7+git 2014-07-02 15:20:45 -07:00
Alex Crawford
4355a05d55 coreos-cloudinit: bump to 0.8.7 2014-07-02 15:20:26 -07:00
Alex Crawford
52c44923dd Merge pull request #173 from crawford/metadata
metadata-service: remove check for OpenStack meta_data.json
2014-07-02 15:19:37 -07:00
Alex Crawford
47748ef4b6 metadata-service: remove check for OpenStack meta_data.json
The meta_data.json blob under OpenStack doesn't actually contain all
of the metadata... Fall back to explicitly requesting each attribute.
2014-07-02 14:38:23 -07:00
Alex Crawford
8eca10200e coreos-cloudinit: bump to 0.8.6+git 2014-07-01 16:17:00 -07:00
Alex Crawford
43be8c8996 coreos-cloudinit: bump to 0.8.6 2014-07-01 16:16:41 -07:00
Alex Crawford
19b4b1160e Merge pull request #171 from crawford/err
metadata-service: Handle no user-data
2014-07-01 16:15:32 -07:00
Alex Crawford
ce6fccfb3c metadata-service: Handle no user-data 2014-07-01 16:10:18 -07:00
Alex Crawford
7d89aefb82 coreos-cloudinit: bump to 0.8.5+git 2014-07-01 15:45:49 -07:00
Alex Crawford
2369e2a920 coreos-cloudinit: bump to 0.8.5 2014-07-01 15:45:23 -07:00
Alex Crawford
6d808048d3 Merge pull request #170 from crawford/metadata
metadata: Fetch the public and private IP addresses
2014-07-01 15:44:14 -07:00
Alex Crawford
276f0b5d99 metadata: Fetch the public and private IP addresses 2014-07-01 14:43:19 -07:00
Jonathan Boulle
92bd5ca5d4 coreos-cloudinit: bump to 0.8.4+git 2014-07-01 12:16:09 -07:00
Jonathan Boulle
5b5ffea126 coreos-cloudinit: bump to 0.8.4 2014-07-01 12:15:48 -07:00
Jonathan Boulle
18068e9375 Merge pull request #169 from jonboulle/pebkac
coreos-cloudinit: apply environment to userdata string
2014-07-01 12:15:06 -07:00
Jonathan Boulle
1b3cabb035 coreos-cloudinit: apply environment to userdata string 2014-07-01 12:08:42 -07:00
Jonathan Boulle
1be2bec1c2 coreos-cloudinit: bump to 0.8.3+git 2014-06-30 22:12:13 -07:00
Jonathan Boulle
f3bd5f543e coreos-cloudinit: bump to 0.8.3 2014-06-30 22:11:15 -07:00
Jonathan Boulle
660feb59b9 Merge pull request #168 from jonboulle/foo
fix ordering error in mergeCloudConfig
2014-06-30 22:08:47 -07:00
Jonathan Boulle
9673dbe12b coreos-cloudinit: fix ordering error in merge invocation 2014-06-30 22:07:05 -07:00
Alex Crawford
2be435dd83 coreos-cloudinit: bump to 0.8.2+git 2014-06-30 18:11:14 -07:00
Alex Crawford
2d91369596 coreos-cloudinit: bump to 0.8.2 2014-06-30 18:10:20 -07:00
Alex Crawford
d8d3928978 Merge pull request #167 from crawford/sshkeys
metadata-service: fix ssh key retrieval and application
2014-06-30 18:08:04 -07:00
Alex Crawford
7fcc540154 metadata-service: fix ssh key retrieval and application
The metadata service wasn't properly fetching the ssh keys from metadata.
Drop the key traversal in favor of explicit key URLs.
2014-06-30 17:45:08 -07:00
Jonathan Boulle
cb7fbd4668 Merge pull request #166 from jonboulle/merge
cloudinit: merge cloudconfig info from user-data and meta-data
2014-06-30 11:27:47 -07:00
Jonathan Boulle
d4e048a1f4 ParseUserData: return nil on empty input string 2014-06-30 11:27:33 -07:00
Jonathan Boulle
231c0fa20b initialize: add tests for ParseMetadata 2014-06-27 23:53:06 -07:00
Jonathan Boulle
1aabacc769 cloudinit: merge cloudconfig info from user-data and meta-data
This attempts to retrieve cloudconfigs from two sources: the meta-data
service, and the user-data service. If only one cloudconfig is found,
that is applied to the system. If both services return a cloudconfig,
the two are merged into a single cloudconfig which is then applied to
the system.

Only a subset of parameters are merged (because the meta-data service
currently only partially populates a cloudconfig). In the event of any
conflicts, parameters in the user-data cloudconfig take precedence over
those in the meta-data cloudconfig.
2014-06-27 23:48:48 -07:00
Alex Crawford
6a2927d701 coreos-cloudinit: bump to 0.8.1+git 2014-06-27 15:00:05 -07:00
Alex Crawford
126188510b coreos-cloudinit: bump to 0.8.1 2014-06-27 14:59:56 -07:00
Alex Crawford
4627ccb444 Merge pull request #165 from crawford/units
units: update dependencies
2014-06-27 14:55:48 -07:00
Alex Crawford
aff372111a units: update dependencies 2014-06-27 14:29:59 -07:00
Alex Crawford
c7081b9918 coreos-cloudinit: bump to 0.8.0+git 2014-06-27 11:33:56 -07:00
Alex Crawford
9ba3b18b59 coreos-cloudinit: bump to 0.8.0 2014-06-27 11:32:52 -07:00
Alex Crawford
099de62e9a Merge pull request #164 from crawford/datasources
datasources: Add support for specifying multiple datasources
2014-06-27 00:25:09 -07:00
Alex Crawford
c089216cb5 datasources: Add support for specifying multiple datasources
If multiple sources are specified, the first available source is used.
2014-06-26 22:32:39 -07:00
Alex Crawford
68dc902ed1 HttpClient: Refactor timeout into two separate functions 2014-06-26 15:16:22 -07:00
Jonathan Boulle
ad66b1c92f Merge pull request #163 from jonboulle/net
coreos-cloudinit: restrict convert-netconf to configdrive
2014-06-25 14:35:27 -07:00
Jonathan Boulle
fbdece2762 coreos-cloudinit: restrict convert-netconf to configdrive 2014-06-25 14:28:11 -07:00
Jonathan Boulle
f85eafb7ca Merge pull request #162 from jonboulle/fffffffffffffffffff
initialize/env: handle nil substitution maps properly
2014-06-25 12:10:00 -07:00
Jonathan Boulle
f0dba2294e initialize/env: handle nil substitution maps properly 2014-06-25 12:07:48 -07:00
Alex Crawford
bda3948382 Merge pull request #159 from crawford/metadata
metadataService: Check both ec2 and openstack urls more explicitly
2014-06-25 11:26:19 -07:00
Alex Crawford
fae81c78f3 metadataService: Check both ec2 and openstack urls more explicitly
Remove the root url parameter for -from-metadata-service since this
is a guaranteed value. Additionally, check for both ec2 and openstack
urls for the metadata and userdata. Fix a bug with the -from-url
option and a panic on an empty response.
2014-06-25 11:19:11 -07:00
Alex Crawford
a5dec7d7bd cloudconfig: Process metadata before userdata
This gives the options in userdata a higher precedence over metadata.
2014-06-25 10:35:26 -07:00
Jonathan Boulle
e1222c9885 Merge pull request #161 from jonboulle/doc
metadata: add links to metadata source information
2014-06-25 08:55:26 -07:00
Jonathan Boulle
ded3bcf122 metadata: add links to metadata source information 2014-06-24 19:26:07 -07:00
Jonathan Boulle
80d00cde94 Merge pull request #158 from jonboulle/nettttt
cloudinit: retrieve IPv4 addresses from metadata
2014-06-24 19:11:15 -07:00
Jonathan Boulle
2805d70ece initialize/env: add notes about tests 2014-06-24 18:52:08 -07:00
Jonathan Boulle
439b7e8b98 initialize/env: fall back to COREOS_*_IPV4 env variables 2014-06-24 18:49:49 -07:00
Jonathan Boulle
ba1c1e97d0 cloudinit: retrieve IPv4 addresses from metadata
This uses the new MetadataService implementation to retrieve values for
$private_ipv4 and $public_ipv4 substitutions, instead of using
environment variables.
2014-06-24 17:46:06 -07:00
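
A typical sketch of where those substitutions appear in a cloud-config (the discovery
token is a placeholder):

  #cloud-config
  coreos:
    etcd:
      discovery: https://discovery.etcd.io/<token>
      addr: $private_ipv4:4001
      peer-addr: $private_ipv4:7001
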
Alex Crawford
8a50fd8595 Merge pull request #154 from crawford/metadata
metadata-service: Add new datasource to download from metadata service
2014-06-24 15:18:27 -07:00
Alex Crawford
465bcce72c metadata_service: Add tests for constructing metadata 2014-06-24 15:08:03 -07:00
Alex Crawford
361edeebc6 metadata-service: Add metadata-service datasource
Move the old metadata-service datasource to the url datasource. This new datasource
checks for the existence of meta-data.json and, if it doesn't exist, walks the
meta-data directory to build a metadata blob.
2014-06-24 15:08:03 -07:00
Jonathan Boulle
29a7b0e34f Merge pull request #155 from jonboulle/etcd
etcdenvironment: order map keys consistently
2014-06-24 15:02:04 -07:00
Alex Crawford
8496ffb53a HttpClient: Wrap errors with error classes 2014-06-24 15:01:31 -07:00
Jonathan Boulle
2c717a6cd1 Merge pull request #157 from jonboulle/core
coreos-cloudinit: clean up flag handling
2014-06-24 14:57:18 -07:00
Jonathan Boulle
13a91c9181 coreos-cloudinit: clean up flag handling 2014-06-24 14:19:55 -07:00
Jonathan Boulle
338e1b64ab etcdenvironment: order map keys consistently 2014-06-23 15:13:11 -07:00
Alex Crawford
8eb0636034 Merge pull request #149 from crawford/network
feat(network): Add support for blind interfaces, support for hwaddress, and bug fixes
2014-06-20 17:53:37 -07:00
Alex Crawford
f7c25a1b83 doc(debian-interfaces): Add basic docs for convert-netconf 2014-06-20 17:51:57 -07:00
Alex Crawford
d6a0d0908c fix(network): Generate prefixes to ensure proper lexicographical ordering
In order for networkd to properly configure the network interfaces, the configs must be
prefixed to ensure that they load in the correct order (parent interfaces have a lower
prefix than their children).
2014-06-20 17:51:57 -07:00
Alex Crawford
5c89afc18a Merge pull request #152 from crawford/metadata
feat(meta_data): Add partial support for meta_data.json
2014-06-20 17:47:08 -07:00
Michael Marineau
376cc4bcac chore(coreos-cloudinit): bump to 0.7.7+git 2014-06-18 15:01:13 -07:00
Michael Marineau
d0a6d6f92f chore(coreos-cloudinit): bump to 0.7.7 2014-06-18 14:55:38 -07:00
Michael Marineau
2be1e52f32 Merge pull request #151 from marineam/mount
fix(configdrive): Use mount units, give virtfs a new mount point.
2014-06-18 13:51:11 -07:00
Michael Marineau
784a71e2bf fix(configdrive): Use mount units, give virtfs a new mount point.
Currently systemd cannot track dependencies on configdrive very well
because it is mounted via a service instead of a mount unit. Also since
the interaction between path and mount units can lead to unexpected
behavior if something goes wrong the cloudinit service is now triggered
explicitly by the mount again. The configdrive path unit remains only as
a fall back for containers where the mount unit doesn't kick in. Better
to have two mechanisms that trigger the cloudinit service than none. :)

Since mounting a virtfs based configdrive requires different mount
options and two different mount units cannot refer to the same path the
virtfs version now mounts to /media/configvirtfs.

There are also two new kernel options:
- `coreos.configdrive=1`: enable config drive on physical hardware.
- `coreos.configdrive=0`: disable config drive on virtual machines.
2014-06-18 13:01:19 -07:00
Alex Crawford
e6cf83a2e5 refactor(netconf): Move netconf processing and handle metadata 2014-06-18 12:43:41 -07:00
Alex Crawford
840c208b60 feat(metadata): Distinguish between userdata and metadata for datasources 2014-06-18 12:34:31 -07:00
Alex Crawford
29ed6b38bd refactor(env): Add the config root and netconf type to datasource and env 2014-06-18 12:27:15 -07:00
Alex Crawford
259c7e1fe2 fix(sshKeyName): Use the SSH key name provided 2014-06-18 11:47:17 -07:00
Alex Crawford
033c8d352f feat(network): Add support for hwaddress
Currently only supports the ether mode of hwaddress. No immediate plans
to support ax25, ARCnet, or netrom.
2014-06-14 21:30:14 -07:00
Alex Crawford
16d7e8af48 fix(network): Take down all interfaces properly
The map of interfaces wasn't being populated correctly. Also, clean up some prints.
2014-06-13 20:53:59 -07:00
Alex Crawford
159f4a2c7c feat(network): Add support for blind interfaces
It is valid for an interface to reference another, otherwise undeclared,
interface (e.g. a bond enslaves eth0 without eth0 having its own iface stanza).
In order to associate the two interfaces, the undeclared interface needs to be
implicitly created so that it can be referenced by the other. This adds the
capability to forward-declare interfaces in addition to cleaning up the
process a little bit.
2014-06-11 22:07:33 -07:00
Michael Marineau
160668284c chore(coreos-cloudinit): bump to 0.7.6+git 2014-06-07 16:04:33 -04:00
Michael Marineau
41b9dfcb1c chore(coreos-cloudinit): bump to 0.7.6 2014-06-07 16:01:31 -04:00
Michael Marineau
ef4c3483b6 Merge pull request #146 from marineam/fix
fix(update): Fix restart of update-engine
2014-06-07 13:04:49 -07:00
Michael Marineau
4bdf633075 fix(update): Fix restart of update-engine
The name was missing .service.
2014-06-07 12:08:22 -07:00
Brian Waldon
c9fc718e18 Merge pull request #145 from bcwaldon/drop-group-req
Relax requirements of update group value
2014-06-06 11:43:22 -07:00
Brian Waldon
4461b3d33d fix(update): Relax requirements of update group value 2014-06-06 11:29:09 -07:00
Jonathan Boulle
c6a1412f6b chore(coreos-cloudinit): bump to 0.7.5+git 2014-06-06 11:14:39 -07:00
Jonathan Boulle
d0cbbd2007 chore(coreos-cloudinit): bump to 0.7.5 2014-06-06 11:10:48 -07:00
Jonathan Boulle
7b5e542eb4 Merge pull request #132 from jonboulle/locksmith
reboot-strategy=off breaks subsequent reboot strategies
2014-06-06 11:08:06 -07:00
Jonathan Boulle
376d82ba63 doc(*): add note about runtime locksmithd unit file 2014-06-06 10:55:42 -07:00
Jonathan Boulle
a6aa9f82b8 fix(systemd): unmask runtime units when mask=False 2014-06-06 10:55:42 -07:00
Jonathan Boulle
00ee047753 fix(locksmith): use a runtime unit for locksmith 2014-06-06 10:55:42 -07:00
Jonathan Boulle
f127406d01 Merge pull request #140 from jonboulle/atomic
fix(system): write all files atomically
2014-06-06 10:37:09 -07:00
Jonathan Boulle
0ddc08d55a fix(system): write all files atomically 2014-06-06 10:36:36 -07:00
Jonathan Boulle
56f455f890 Merge pull request #141 from jonboulle/141
cloudinit doesn't restart update-engine.service
2014-06-06 10:25:24 -07:00
Jonathan Boulle
dd861b9f88 fix(initialize): ensure update-engine is restarted after group/server
changes
2014-06-05 16:12:40 -07:00
Alex Crawford
f7d01da267 Merge pull request #138 from spkane/github-ent-key-docs
Add a valid URL example for Github Enterprise token based API auth
2014-06-04 16:15:04 -07:00
Sean P. Kane
fc8f30bf08 Add a valid URL example for Github Enterprise token based API auth 2014-06-04 16:03:02 -07:00
Brandon Philips
075c0557e7 Merge pull request #137 from robszumski/patch-1
fix(docs): remove unneeded install section
2014-06-04 14:22:55 -07:00
Rob Szumski
d25e13a2c6 fix(docs): remove unneeded install section 2014-06-04 13:57:18 -07:00
Alex Crawford
cf1ffad533 chore(coreos-cloudinit): bump to 0.7.4+git 2014-06-03 14:14:47 -07:00
Alex Crawford
82706b1d5f chore(coreos-cloudinit): bump to 0.7.4 2014-06-03 14:13:56 -07:00
Alex Crawford
38c8fda0d1 Merge pull request #124 from crawford/networkd
feat(networkd): Adding support for debian-interface-to-networkd conversion
2014-06-03 13:55:06 -07:00
Alex Crawford
69240a7e39 feat(systemd): Update the systemd unit files to use configdrive
This makes it so that /media/configdrive can be used for user-data
and network configs.
2014-06-02 18:43:22 -07:00
Brian Waldon
c4f1996843 fix(doc): Correct spacing in cloud-config.md 2014-06-02 16:49:44 -07:00
Alex Crawford
48df1be793 feat(convertNetconf): Add support for network config conversion
Adding the flag -convertNetconf which is used to specify the config
format to convert from (right now, only 'debian' is supported).
Once the network configs are generated, they are written to
systemd's runtime network directory and the network is restarted.
2014-06-02 15:31:30 -07:00
Alex Crawford
79a40a38d8 add(netlink): import dotcloud/docker/pkg/netlink 2014-06-02 15:31:30 -07:00
Alex Crawford
856061b445 test(interfaces): Add tests for network conversion
These tests should be an exhaustive set of tests for the parsing
of Debian interface files and generation of equivalent networkd
config files.
2014-06-02 15:31:27 -07:00
Alex Crawford
38321fedce feat(interfaces): Add support for interfaces file
This adds the ability for cloudinit to parse a Debian interfaces
file and generate the corresponding networkd configs.
2014-06-02 15:30:37 -07:00
Alex Crawford
f8a823cf7e refactor(userdata): Move userdata processing into a function 2014-06-02 14:59:01 -07:00
Alex Crawford
a4035cffea feat(config-drive): Add support for reading user-data from config-drive
The -config-drive flag tells cloudinit to read the user-data from
within the config-drive (./openstack/latest/user-data).
2014-06-02 14:58:57 -07:00
Brian Waldon
5c8fb7f465 fix(doc): Add newlines for proper formatting 2014-06-02 11:42:43 -07:00
Alex Crawford
7a02bf54ed Merge pull request #130 from crawford/docs
fix(docs): Fix minor typo describing runtime field for units
2014-05-30 11:52:30 -07:00
Alex Crawford
388dd67388 fix(docs): Fix minor typo describing runtime field for units 2014-05-30 11:45:44 -07:00
Jonathan Boulle
ded6d94180 chore(coreos-cloudinit): bump to 0.7.3+git 2014-05-29 14:55:34 -07:00
Jonathan Boulle
a9a910b5c4 chore(coreos-cloudinit): bump to 0.7.3 2014-05-29 14:52:58 -07:00
Jonathan Boulle
8e94b4140a Merge pull request #122 from jonboulle/122
ec2-cloudinit service fails after reboot with "reboot-strategy: off"
2014-05-29 14:25:58 -07:00
Jonathan Boulle
cd322863e9 Merge pull request #129 from jonboulle/exp
fix(pkg): simplify exponential backoff to avoid overflows
2014-05-29 14:02:47 -07:00
Jonathan Boulle
786e4bef65 fix(systemd): remove any existing unit when calling mask 2014-05-29 13:59:55 -07:00
Jonathan Boulle
269a658d4b fix(pkg): simplify exponential backoff to avoid overflows 2014-05-29 11:11:18 -07:00
Michael Marineau
e317c7eb9a chore(coreos-cloudinit): bump to 0.7.2+git 2014-05-27 14:02:11 -07:00
Michael Marineau
974de943e0 chore(coreos-cloudinit): bump to 0.7.2 2014-05-27 13:37:58 -07:00
Jonathan Boulle
db3f008543 Merge pull request #127 from jonboulle/127
"Enable" option does not support units in /usr/lib64/systemd
2014-05-26 15:24:30 -07:00
Jonathan Boulle
b04509ae54 fix(systemd): EnableUnitFile unit name rather than absolute destination 2014-05-26 15:16:24 -07:00
Jonathan Boulle
6c07e8784f Merge pull request #125 from jonboulle/no_locksmith_enable
Dies trying to enable non-existent /etc/systemd/system/locksmithd.service
2014-05-26 13:11:47 -07:00
Jonathan Boulle
60ab4222de fix(update): locksmith service does not need disabling/enabling 2014-05-26 12:33:23 -07:00
Brandon Philips
1a295f65c7 Merge pull request #123 from c4milo/shared-http-client
feat(util/http_client): Adds generic HTTP client
2014-05-22 14:37:32 -07:00
Camilo Aguilar
cec0926c5c fix(pkg/http_client): Printf is smarter than you think
Printf determines what the duration unit is
and prints it accordingly.
2014-05-22 14:53:54 -04:00
Camilo Aguilar
8ca3c2ed1f style(httpbackoff -> pkg): Adjusts package name to follow convention 2014-05-22 14:37:19 -04:00
Camilo Aguilar
2cedebb4eb style(util->httpbackoff): Changes package as per @philips suggestion 2014-05-21 21:12:16 -04:00
Camilo Aguilar
3e00a37ef5 feat(util/http_client): Adds generic HTTP client
Supports retries with exponential backoff as well as connection
timeouts and the ability to skip SSL/TLS verification.

This commit also refactors datasource and initialize packages
in order to use the new HTTP client.
2014-05-21 13:31:50 -04:00
Jonathan Boulle
59d1eba423 Merge pull request #111 from namsral/patch-1
Trim newlines from the cloud-config-url option
2014-05-21 10:18:24 -07:00
Jonathan Boulle
af69149260 Merge pull request #120 from brianredbeard/pr20-fix
fix(docs) Clear description of update server changes
2014-05-21 10:01:25 -07:00
Brandon Philips
5fa2ad8dfd Merge pull request #121 from iamveen/master
removed tricky space from cloud-config header
2014-05-21 05:33:05 -07:00
Lars Wiegman
513a1eb602 Trim newlines from the cloud-config-url kernel parameter and added a test
- In the Fetch function trim whitespace from /proc/cmdline
- New test for Fetch function
- Added Location field to the procCmdline struct for testing
2014-05-21 11:09:39 +02:00
Gavin Dunne
5189e1594e removed tricky space from cloud-config header 2014-05-21 01:22:09 -07:00
Brian 'Redbeard' Harrington
8b5bc47429 fix(doc) more sensible ordering
It makes a bit more sense to specify the scope of the section
before getting into details about how it's done.
2014-05-20 23:29:56 -07:00
Brian 'Redbeard' Harrington
a64fcd2893 fix(docs) Clear description of update server changes TBD
Pulling in @philips' changes from coreos/coreos-cloudinit#6 after
trashing PR coreos/coreos-cloudinit#20.  Cleanup of that PR was
beyond my git-fu.

cc @jonboulle
2014-05-20 22:53:29 -07:00
Brandon Philips
5b1145c044 Merge pull request #118 from c4milo/log-timestamp-fix
chore(logging): Removes duplicated timestamp during booting
2014-05-17 16:31:07 -07:00
Michael Marineau
a49877b99f chore(coreos-cloudinit): bump to 0.7.1+git 2014-05-16 21:23:34 -07:00
Michael Marineau
24f181f7a3 chore(coreos-cloudinit): bump to 0.7.1 2014-05-16 21:21:47 -07:00
Michael Marineau
61e70fcce8 Merge pull request #119 from marineam/container
container and panic fixes
2014-05-16 21:19:43 -07:00
Michael Marineau
ea6262f0ae fix(etcd): fix runtime panic when etcd section is missing.
The etcd code tries to assign ee["name"] even when the map was never
defined, and assigning to an uninitialized map causes a panic.
2014-05-16 20:38:49 -07:00
Michael Marineau
f83ce07416 feat(units): Add generic cloudinit path unit
Switch to triggering common user configs via a path unit. This is
particularly useful for config drive so that a config drive can be
mounted by something other than the udev triggered services, a bind
mount when running in a container for example.
2014-05-16 20:38:49 -07:00
Brandon Philips
140682350d chore(coreos-cloudinit): bump to 0.7.0+git 2014-05-16 18:22:22 -07:00
Brandon Philips
289ada4668 chore(coreos-cloudinit): bump to 0.7.0 2014-05-16 18:22:22 -07:00
Camilo Aguilar
5d58c6c1c1 chore(logging): Removes duplicated timestamp during booting 2014-05-16 17:35:31 -04:00
Jonathan Boulle
d95df78c6d Merge pull request #117 from c4milo/travis-support
chore(travis): Adds travis yaml file as well as badge in README
2014-05-16 14:11:37 -07:00
Camilo Aguilar
ac4c969454 chore(travis): Adds travis yaml file and badge in README 2014-05-16 17:09:59 -04:00
Jonathan Boulle
04fcd3935f Merge pull request #114 from c4milo/fetch-url-refactor
refactor(datastore/fetch): Make fetching user-data files more failure-proof.
2014-05-16 14:03:54 -07:00
Camilo Aguilar
36efcc9d69 test(datastore/fetch): Makes sure err is not nil 2014-05-16 16:57:58 -04:00
Jonathan Boulle
f7ecc2461c Merge pull request #109 from jonboulle/fleet
fix(docs): add documentation for fleet section
2014-05-16 13:38:12 -07:00
Jonathan Boulle
8df9ee3ca2 Merge pull request #115 from burke/master
Response body must not be closed if request error'd.
2014-05-16 13:20:27 -07:00
Burke Libbey
321ceaa0da Response body must not be closed if request error'd. 2014-05-16 15:42:11 -04:00
Jonathan Boulle
05daad692e fix(docs): add documentation for fleet section 2014-05-16 12:10:21 -07:00
Camilo Aguilar
4b6fc63e8c fix(datastore/fetch): off-by-one oversight 2014-05-16 12:36:05 -04:00
Camilo Aguilar
fcccfb085f style(datastore/fetch): Adjusts comments formatting 2014-05-16 12:35:39 -04:00
Camilo Aguilar
ebf134f181 refactor(datastore/fetch): Make fetching user-data files more failure-proof
- Adds URL validations
- Adds timeout support for http client
- Limits the amount of retries to not spin forever
- Fails faster if response status code is 4xx
- Does a little bit more of logging
- Adds more tests
2014-05-16 12:35:06 -04:00
Jonathan Boulle
51d77516a5 Merge pull request #90 from jonboulle/90
Warn or error on unrecognized keys in cloud-config.yml
2014-05-15 18:53:48 -07:00
Jonathan Boulle
98f5ead730 fix(*): catch more unknown keys in user and file sections 2014-05-15 18:53:17 -07:00
Jonathan Boulle
81fe0dc9e0 fix(initialize): also check for unknown coreos keys 2014-05-15 18:53:17 -07:00
Jonathan Boulle
e852be65f7 feat(*): warn on encountering unrecognized keys in cloud-config 2014-05-15 18:53:17 -07:00
Brandon Philips
0a16532d4b Merge pull request #113 from c4milo/exponential_backoff
Exponential backoff with sleep capping
2014-05-15 10:16:42 -07:00
Camilo Aguilar
ff70a60fbc Adds sleep cap to exponential backoff so it does not go too high 2014-05-15 13:04:37 -04:00
Kelsey Hightower
31f61d7531 Use exponential backoff when fetching user-data from a URL.
The user-cloudinit-proc-cmdline systemd unit is responsible for
fetching user-data from various sources during the cloud-init
process. When fetching user-data from a URL datasource, we face
a race condition since the network may not be available, which
can cause the job to fail and no further attempts to fetch the
user-data are made.

Eliminate the race condition when fetching user-data from a URL
datasource. Retry the fetch using an exponential backoff until
the user-data is retrieved.

Fixes issue 105.
2014-05-14 23:15:49 -07:00
Jonathan Boulle
b505e6241c Merge pull request #103 from jonboulle/20
feat(*): add more configuration options for update.conf
2014-05-14 13:14:35 -07:00
Jonathan Boulle
e413a97741 feat(update): add more configuration options for update.conf 2014-05-14 13:13:19 -07:00
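
These options are set through the coreos.update section of a cloud-config and end up
in /etc/coreos/update.conf. A sketch, assuming the added options are the group and
server settings alongside the existing reboot-strategy (values are illustrative):

  #cloud-config
  coreos:
    update:
      reboot-strategy: best-effort
      group: stable
      server: https://example.com/update
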
Jonathan Boulle
41cbec8729 Merge pull request #101 from jonboulle/fleet
feat(*): add basic fleet configuration to cloud-config
2014-05-14 12:28:52 -07:00
Jonathan Boulle
919298e545 feat(fleet): add basic fleet configuration to cloud-config 2014-05-14 12:28:20 -07:00
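
A minimal sketch of the new coreos.fleet section; keys map onto fleet's configuration
options the same way the etcd section does (the metadata value is illustrative):

  #cloud-config
  coreos:
    fleet:
      public-ip: $public_ipv4
      metadata: region=us-west
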
Jonathan Boulle
ae424b5637 Merge pull request #106 from jonboulle/locksmith_to_update
refactor(init): rename locksmith to update
2014-05-14 11:50:09 -07:00
Jonathan Boulle
e93911344d refactor(init): rename locksmith to update 2014-05-14 11:40:39 -07:00
Jonathan Boulle
32c52d8729 Merge pull request #100 from jonboulle/rework
refactor(*): rework cloudconfig for better extensibility and consistency
2014-05-14 11:39:53 -07:00
Jonathan Boulle
cdee32d245 refactor(systemd): don't allow users to set DropIn=true yet 2014-05-14 11:34:13 -07:00
Jonathan Boulle
31cfad91e3 refactor(*): rework cloudconfig for better extensibility and consistency
This change creates a few simple interfaces for coreos-specific
configuration options and moves things to them wherever possible; so if
an option needs to write a file, or create a unit, it is acted on
exactly the same way as every other file/unit that needs to be written
during the cloud configuration process.
2014-05-14 11:34:07 -07:00
Brian Waldon
e814b37839 Merge pull request #107 from bcwaldon/locksmith-no-etc
fix(coreos-cloudinit): Ensure /etc/coreos exists before writing to it
2014-05-14 10:49:23 -07:00
Brian Waldon
cb4d9e81a4 fix(coreos-cloudinit): Ensure /etc/coreos exists before writing to it 2014-05-14 10:47:18 -07:00
Jonathan Boulle
b87a4628e6 Merge pull request #99 from jonboulle/simple
chore(cloudinit): remove superfluous check
2014-05-12 10:51:51 -07:00
Jonathan Boulle
b22fdd5ac9 Merge pull request #104 from jonboulle/tests
feat(tests): add coverage script
2014-05-12 10:51:38 -07:00
Jonathan Boulle
6939fc2ddc feat(tests): add cover script 2014-05-10 01:42:57 -07:00
Jonathan Boulle
e3117269cb chore(cloudinit): remove superfluous check 2014-05-09 20:32:51 -07:00
Brandon Philips
3bb3a683a4 Merge pull request #98 from philips/remove-oem-from-default
chore(Documentation): move OEM into its own doc
2014-05-08 09:41:42 -07:00
Brandon Philips
e1033c979e chore(Documentation): move OEM into its own doc
People are customizing the OEM needlessly. Just move it into its own
doc.
2014-05-08 09:32:21 -07:00
Jonathan Boulle
9a4d24826f Merge pull request #80 from jonboulle/master
users[i].primary-group option seems invalid
2014-05-07 21:12:45 -07:00
Jonathan Boulle
7bed1307e1 fix(user): use correct primary group flag for useradd 2014-05-07 14:06:51 -07:00
Brandon Philips
47b536532d chore(coreos-cloudinit): version +git 2014-05-06 21:09:40 -07:00
Brandon Philips
7df5cf761e chore(coreos-cloudinit): bump to 0.6.0
The major feature in this release is coreos.update.reboot-strategy
2014-05-06 21:05:42 -07:00
Brandon Philips
799c02865c Merge pull request #96 from philips/locksmith-support
Add locksmith support v2
2014-05-06 21:00:44 -07:00
Brandon Philips
9f38792d43 fix(initialize): use REBOOT_STRATEGY in update.conf
Change from STRATEGY to REBOOT_STRATEGY and update the function names to
reflect that this is a config now.
2014-05-06 20:57:29 -07:00
Alex Polvi
7e4fa423e4 feat(initialize): add locksmith configuration
configure locksmith strategy based on the cloud config.
2014-05-06 20:57:28 -07:00
Brandon Philips
c3f17bd07b feat(system): add MaskUnit to systemd 2014-05-06 17:46:16 -07:00
Brandon Philips
85a473d972 Merge pull request #95 from philips/various-code-cleanups
chore(initialize): code cleanups and gofmt
2014-05-06 16:19:35 -07:00
Brandon Philips
aea5ca5252 chore(initialize): code cleanups and gofmt 2014-05-06 16:13:21 -07:00
Michael Marineau
4e84180ad5 chore(release): Bump version to v0.5.2+git 2014-05-05 14:09:08 -07:00
Michael Marineau
0f1717bf26 chore(release): Bump version to v0.5.2 2014-05-05 14:07:50 -07:00
Michael Marineau
6a9aa60a8d Merge pull request #93 from marineam/reload
Revert "fix(units): Drop automatic daemon-reload"
2014-05-05 14:02:16 -07:00
Michael Marineau
7cacb2e127 Revert "fix(units): Drop automatic daemon-reload"
daemon-reload should be fixed now and the latest CoreOS with locksmith
is causing the etcd unit to get lazy-loaded before all the cloudinit
processes have finished configuring etcd via dropin files. In short,
the luck we were relying on to get by without daemon-reload has
officially run out. Cross your fingers!

This reverts commit 580460ff3f.
2014-05-05 13:16:07 -07:00
Brian Waldon
1f688dcdca Merge pull request #92 from bcwaldon/crlf-test
test(crlf): Add test that parses user-data with carriage returns
2014-05-05 10:50:25 -07:00
Brian Waldon
f6d8190e8f test(crlf): Add test that parses user-data with carriage returns 2014-05-05 10:49:02 -07:00
Brandon Philips
3263816cf5 Merge https://github.com/coreos/template-project 2014-05-05 09:44:59 -07:00
Michael Marineau
96e1cb5a7a Merge pull request #89 from robszumski/doc-write-files
feat(docs): include write_files example
2014-04-29 11:26:44 -07:00
Rob Szumski
cf556d2a81 feat(docs): include write_files example 2014-04-29 11:17:22 -07:00
Jonathan Boulle
62bda8e6cc Merge pull request #88 from robszumski/master
fix(docs): start the example unit
2014-04-29 12:15:44 -06:00
Rob Szumski
0d1d1f77be fix(docs): start the example unit 2014-04-28 10:57:11 -07:00
Michael Marineau
a7e21747fa Merge pull request #87 from marineam/hack
fix(configdrive): Always run after OEM and ec2 metadata.
2014-04-23 14:54:19 -07:00
Michael Marineau
26b54534d6 fix(configdrive): Always run after OEM and ec2 metadata.
A workaround for https://github.com/coreos/coreos-cloudinit/issues/86

Longer term cloudinit needs to be fixed to not corrupt the system when
multiple config sources are being used. We've pretty much gotten this
far without this coming up because most configs don't conflict so badly.
2014-04-23 14:38:54 -07:00
Brian Waldon
8201d75115 chore(release): Bump version to v0.5.1+git 2014-04-22 18:22:35 -07:00
Brian Waldon
1d024af4c1 chore(release): Bump version to v0.5.1 2014-04-22 18:22:24 -07:00
Brian Waldon
09c690cbe7 Merge pull request #85 from bcwaldon/pxe-unit
feat(proc-cmdline): Add proc-cmdline unit
2014-04-22 18:21:51 -07:00
Brian Waldon
49adf19081 feat(proc-cmdline): Add proc-cmdline unit
This unit will always be started, but will only do anything if
a `cloud-config-url=<url>` token is provided in /proc/cmdline.
2014-04-22 17:56:52 -07:00
Brian Waldon
46b046c82e chore(release): Bump version to v0.5.0+git 2014-04-22 16:48:32 -07:00
Brian Waldon
e64b61b312 chore(release): Bump version to v0.5.0 2014-04-22 16:48:21 -07:00
Brian Waldon
d72e10125a Merge pull request #84 from bcwaldon/proc-cmdline
feat(proc-cmdline): Parse /proc/cmdline for cloud-config-url
2014-04-22 16:43:05 -07:00
Brian Waldon
3de3d2c050 feat(proc-cmdline): Parse /proc/cmdline for cloud-config-url
If the --from-proc-cmdline flag is given to coreos-cloudinit, the local
/proc/cmdline file will be parsed for a cloud-config-url
2014-04-22 16:38:01 -07:00
Brian Waldon
2ff0762b0c Merge pull request #83 from robszumski/correct-headers
docs(cloud-config): correct headers
2014-04-21 19:15:50 -07:00
Rob Szumski
d6bacb24bc docs(cloud-config): correct headers 2014-04-21 17:56:35 -07:00
Brian Waldon
926eb4dbb7 Merge pull request #77 from chexxor/master
Update cloud-config.md to include expected file format
2014-04-21 14:27:22 -07:00
Brian Waldon
e7599fea58 Merge pull request #82 from bcwaldon/fix-68
fix(userdata): Strip \r when checking header
2014-04-21 14:26:31 -07:00
Brian Waldon
e98c58c656 fix(userdata): Strip \r when checking header
Fix #68
2014-04-21 13:40:26 -07:00
Alex Berg
ae350a3b34 Update cloud-config.md - use "you" 2014-04-18 11:45:02 -05:00
Alex Berg
c3b53f24cf Update cloud-config.md to use "parameter", not "option" 2014-04-18 11:45:01 -05:00
Alex Berg
8bee85e63d Update cloud-config.md based on feedback 2014-04-18 11:43:54 -05:00
Alex Berg
4c02e99bc8 Update cloud-config.md option descriptions
Re-word a few more things to look more like docs.
2014-04-18 11:43:53 -05:00
Alex Berg
0fb5291dd0 Update cloud-config.md to include expected file format
Clarify root-level keys. Use page structure to indicate expected values.
2014-04-18 11:43:53 -05:00
Brian Waldon
7f55876378 Merge pull request #79 from robszumski/note-config-drive
feat(docs): note config-drive
2014-04-17 09:36:57 -07:00
Brian Waldon
eb51a89f78 Merge pull request #72 from bcwaldon/unit-enable
Address unit enabling issues
2014-04-17 09:32:47 -07:00
Rob Szumski
588ff4c26c feat(docs): note config-drive 2014-04-16 22:35:39 -07:00
Michael Marineau
5472de8821 Merge pull request #78 from robszumski/update-user-group
fix(docs): use better group example
2014-04-16 16:49:50 -07:00
Rob Szumski
e6b632f817 fix(docs): use better group example 2014-04-16 16:48:04 -07:00
Michael Marineau
13a3d892ca Merge pull request #76 from marineam/units2
fix(units): Relax ordering requirements for now.
2014-04-15 15:19:13 -07:00
Brian Waldon
2e237ebead Merge pull request #66 from bcwaldon/doc-encoding
doc(write_files): Explicitly document lack of encoding support
2014-04-15 10:06:14 -07:00
Brian Waldon
61bb63b6e6 feat(unit): Allow units to be enabled even if contents not provided 2014-04-15 09:00:53 -07:00
Brian Waldon
476761cf62 refactor(unit): Separate UnitDestination from PlaceUnit 2014-04-15 09:00:53 -07:00
Brian Waldon
5981e12ac0 feat(unit): Allow user to control enabling units
Fix #69 - A user may provide an `enable` attribute of a unit in their
cloud config document. If true, coreos-cloudinit will instruct systemd
to enable the associated unit. If false, the unit will not be enabled.

Fix #71 - The default enable behavior has been changed from on to off.
2014-04-15 09:00:52 -07:00
Michael Marineau
78d8be8427 fix(units): Relax ordering requirements for now.
The current cloudinit implementation blocks when starting units, which
causes it to deadlock the boot process if a system cloud config starts a
user cloud config, because the user configs want to run after the system
config is done. Until cloudinit switches to non-blocking calls, user configs
will go back to just depending on coreos-setup-environment.service.
2014-04-14 21:39:40 -04:00
Michael Marineau
10d73930d9 Merge pull request #62 from marineam/units
add(units): Generic config drive and other systemd units.
2014-04-11 13:26:59 -07:00
Brandon Philips
ea12c0bfd1 Merge pull request #67 from robszumski/remove_disco
fix(docs): remove real discovery token
2014-04-09 21:56:26 -07:00
Rob Szumski
6540d12d25 fix(docs): remove real discovery token 2014-04-09 21:55:19 -07:00
Michael Marineau
c438a42587 feat(units): Generic config drive and other systemd units. 2014-04-09 19:10:07 -07:00
Brian Waldon
19f8fe49af doc(write_files): Explicitly document lack of encoding support 2014-04-08 08:34:39 -07:00
Michael Marineau
58b091061e Merge pull request #57 from marineam/passwd
fix(user): Use '*' as default password field rather than '!'
2014-04-07 14:13:25 -07:00
Brian Waldon
8a7df360ac Merge pull request #65 from bcwaldon/hosts-newline
fix(manage_etc_hosts): Append newline to /etc/hosts
2014-04-07 11:09:27 -07:00
Brian Waldon
ba7cf90315 fix(manage_etc_hosts): Append newline to /etc/hosts 2014-04-07 11:01:17 -07:00
Brian Waldon
8841740a2b doc(oem): remove quotes from oem doc 2014-04-07 10:58:13 -07:00
Brian Waldon
dfe1255ac3 chore(release): Bump version to v0.4.0+git 2014-04-07 10:23:58 -07:00
Brian Waldon
0fddd1735d chore(release): Bump version to v0.4.0 2014-04-07 10:23:28 -07:00
Brandon Philips
f779a3f7f5 Merge pull request #64 from philips/no-quotes-on-oem-id-or-version
fix(initialize): don't quote version or ID
2014-04-07 10:17:29 -07:00
Brandon Philips
7015338aef fix(initialize): don't quote version or ID
The update_engine parsing and XML generation code is very naive. Instead
of trying to implement a correct parser and generator in C++, just
generate a file that doesn't have quotes around fields that we know
won't have spaces.
2014-04-07 09:56:57 -07:00
Jonathan Boulle
e01a1f70c3 Merge pull request #2 from jonboulle/master
Clean up CONTRIBUTING.md and other bits of template-project
2014-04-04 10:41:40 -07:00
Jonathan Boulle
2e4ea503b0 chore(contributing): clean up CONTRIBUTING.md and split out DCO 2014-04-04 10:40:37 -07:00
Brian Waldon
34aa147ebe Merge pull request #58 from gabrtv/manage_etc_hosts
feat(etc-hosts) add support for manage_etc_hosts: localhost
2014-04-02 23:11:03 -07:00
Gabriel Monroy
4d02e1da8e feat(etc-hosts) add support for manage_etc_hosts: localhost
This feature is based on https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config.txt#L447:L482
2014-04-01 16:02:12 -06:00
Michael Marineau
5ef3e1f32b fix(user): Use '*' as default password field rather than '!'
When using openssh without PAM, it checks for a '!' prefix in the password
field and locks the account entirely if one is found. The other common lock
character, '*', still allows login via ssh keys, so use it instead.
2014-03-31 22:20:02 -07:00
polvi
23d02363ee Merge pull request #56 from cbmd/patch-1
Fixed indentation for users creation example
2014-03-28 08:40:34 -07:00
Vadym Okun
3c4fe9e260 Fixed indentation for users creation example 2014-03-28 13:23:58 +02:00
Brian Waldon
a594e053f5 chore(doc): clean up formatting 2014-03-27 20:19:42 -07:00
Brian Waldon
f3ba47ac89 Merge pull request #48 from calavera/key_import_url
feat(ssh-import): Add ssh-import-url user attribute.
2014-03-27 20:16:10 -07:00
David Calavera
7d814396b7 feat(ssh-import): Add ssh-import-url user attribute. 2014-03-28 09:39:47 +08:00
Brian Waldon
47ca113385 chore(release): Bump version to v0.3.2+git 2014-03-27 18:14:24 -07:00
Brian Waldon
639c693153 chore(release): Bump version to v0.3.2 2014-03-27 18:14:16 -07:00
Brian Waldon
b4027077ff Merge pull request #55 from bcwaldon/drop-reload
fix(units): Drop automatic daemon-reload
2014-03-27 18:12:22 -07:00
Brian Waldon
580460ff3f fix(units): Drop automatic daemon-reload 2014-03-27 17:30:05 -07:00
Brian Waldon
b246ec0397 chore(release): Bump version to v0.3.1+git 2014-03-25 20:06:19 -07:00
Brian Waldon
4977c774d8 chore(release): Bump version to v0.3.1 2014-03-25 20:06:07 -07:00
Brian Waldon
661bae11fc Merge pull request #53 from bcwaldon/fix-reload
Fix systemd daemon-reload
2014-03-25 20:04:24 -07:00
Brian Waldon
58ae898948 fix(systemd): Update usage of dbus.Reload 2014-03-25 19:37:05 -07:00
Brian Waldon
f5f9a0a6a9 bump(github.com/coreos/go-systemd/dbus): 4fbc5060a317b142e6c7bfbedb65596d5f0ab99b 2014-03-25 19:37:05 -07:00
Brian Waldon
477ae29135 fix(systemd): Fail if daemon-reload returns error 2014-03-25 18:50:48 -07:00
Brian Waldon
0203d4a9f3 chore(release): Bump version to v0.3.0+git 2014-03-24 18:03:45 -07:00
Brandon Philips
c7aef5fdf2 Merge pull request #1 from bcwaldon/fix-case
fix(CONTRIBUTING.md): Fix title case
2014-02-05 15:52:24 -08:00
Brian Waldon
c4605160c5 fix(CONTRIBUTING.md): Fix title case 2014-02-05 15:51:24 -08:00
Brandon Philips
054de85da2 feat(*): initial commit 2014-01-19 12:25:11 -08:00
234 changed files with 14996 additions and 2473 deletions

3
.gitignore vendored

@@ -1,3 +1,4 @@
*.swp
bin/
pkg/
coverage/
gopath/

11
.travis.yml Normal file

@@ -0,0 +1,11 @@
language: go
go:
- 1.3
- 1.2
install:
- go get code.google.com/p/go.tools/cmd/cover
- go get code.google.com/p/go.tools/cmd/vet
script:
- ./test

68
CONTRIBUTING.md Normal file

@@ -0,0 +1,68 @@
# How to Contribute
CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via
GitHub pull requests. This document outlines some of the conventions on
development workflow, commit message formatting, contact points and other
resources to make it easier to get your contribution accepted.
# Certificate of Origin
By contributing to this project you agree to the Developer Certificate of
Origin (DCO). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution. See the [DCO](DCO) file for details.
# Email and Chat
The project currently uses the general CoreOS email list and IRC channel:
- Email: [coreos-dev](https://groups.google.com/forum/#!forum/coreos-dev)
- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org
## Getting Started
- Fork the repository on GitHub
- Read the [README](README.md) for build and test instructions
- Play with the project, submit bugs, submit patches!
## Contribution Flow
This is a rough outline of what a contributor's workflow looks like:
- Create a topic branch from where you want to base your work (usually master).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see below).
- Push your changes to a topic branch in your fork of the repository.
- Make sure the tests pass, and add any new tests as appropriate.
- Submit a pull request to the original repository.
Thanks for your contributions!
### Format of the Commit Message
We follow a rough convention for commit messages that is designed to answer two
questions: what changed and why. The subject line should feature the what and
the body of the commit should describe the why.
```
environment: write new keys in consistent order
Go 1.3 randomizes the ordering of keys when iterating over a map.
Sort the keys to make this ordering consistent.
Fixes #38
```
The format can be described more formally as follows:
```
<subsystem>: <what changed>
<BLANK LINE>
<why this change was made>
<BLANK LINE>
<footer>
```
The first line is the subject and should be no longer than 70 characters, the
second line is always blank, and other lines should be wrapped at 80 characters.
This allows the message to be easier to read on GitHub as well as in various
git tools.

36
DCO Normal file

@@ -0,0 +1,36 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.


@@ -0,0 +1,37 @@
## OEM configuration
The `coreos.oem.*` parameters follow the [os-release spec][os-release], but have been repurposed as a way for coreos-cloudinit to know about the OEM partition on this machine. Customizing this section is only needed when generating a new OEM of CoreOS from the SDK. The fields include:
- **id**: Lowercase string identifying the OEM
- **name**: Human-friendly string representing the OEM
- **version-id**: Lowercase string identifying the version of the OEM
- **home-url**: Link to the homepage of the provider or OEM
- **bug-report-url**: Link to a place to file bug reports about this OEM
coreos-cloudinit renders these fields to `/etc/oem-release`.
If no **id** field is provided, coreos-cloudinit will ignore this section.
For example, the following cloud-config document...
```yaml
#cloud-config
coreos:
oem:
id: rackspace
name: Rackspace Cloud Servers
version-id: 168.0.0
home-url: https://www.rackspace.com/cloud/servers/
bug-report-url: https://github.com/coreos/coreos-overlay
```
...would be rendered to the following `/etc/oem-release`:
```yaml
ID=rackspace
NAME="Rackspace Cloud Servers"
VERSION_ID=168.0.0
HOME_URL="https://www.rackspace.com/cloud/servers/"
BUG_REPORT_URL="https://github.com/coreos/coreos-overlay"
```
[os-release]: http://www.freedesktop.org/software/systemd/man/os-release.html


@@ -1,92 +1,166 @@
# Customize with Cloud-Config
# Using Cloud-Config
CoreOS allows you to configure networking, create users, launch systemd units on startup and more. We've designed our implementation to allow the same cloud-config file to work across all of our supported platforms.
CoreOS allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units. This document describes the full list of items we can configure. The `coreos-cloudinit` program uses these files as it configures the OS after startup or during runtime. Your cloud-config is processed during each boot.
Only a subset of [cloud-config functionality][cloud-config] is implemented. A set of custom parameters were added to the cloud-config format that are specific to CoreOS. An example file containing all available options can be found at the bottom of this page.
## Configuration File
The file used by this system initialization program is called a "cloud-config" file. It is inspired by the [cloud-init][cloud-init] project's [cloud-config][cloud-config] file, which is "the defacto multi-distribution package that handles early initialization of a cloud instance" ([cloud-init docs][cloud-init-docs]). Because the cloud-init project includes tools which aren't used by CoreOS, only the relevant subset of its configuration items will be implemented in our cloud-config file. In addition to those, we added a few CoreOS-specific items, such as etcd configuration, OEM definition, and systemd units.
We've designed our implementation to allow the same cloud-config file to work across all of our supported platforms.
[cloud-init]: https://launchpad.net/cloud-init
[cloud-init-docs]: http://cloudinit.readthedocs.org/en/latest/index.html
[cloud-config]: http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data
## CoreOS Parameters
### File Format
### coreos.etcd
The cloud-config file uses the [YAML][yaml] file format, which uses whitespace and new-lines to delimit lists, associative arrays, and values.
The `coreos.etcd.*` options are translated to a partial systemd unit acting as an etcd configuration file.
We can use the templating feature of coreos-cloudinit to automate etcd configuration with the `$private_ipv4` and `$public_ipv4` fields. For example, the following cloud-config document...
A cloud-config file must contain `#cloud-config`, followed by an associative array which has zero or more of the following keys:
```
- `coreos`
- `ssh_authorized_keys`
- `hostname`
- `users`
- `write_files`
- `manage_etc_hosts`
The expected values for these keys are defined in the rest of this document.
[yaml]: https://en.wikipedia.org/wiki/YAML
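For illustration, a minimal cloud-config using two of these keys might look like the following (the hostname and the truncated key are placeholders):
```yaml
#cloud-config
hostname: coreos1
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h... core@example
```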
### Providing Cloud-Config with Config-Drive
CoreOS tries to conform to each platform's native method to provide user data. Each cloud provider tends to be unique, but this complexity has been abstracted by CoreOS. You can view each platform's instructions on their documentation pages. The most universal way to provide cloud-config is [via config-drive](https://github.com/coreos/coreos-cloudinit/blob/master/Documentation/config-drive.md), which attaches a read-only device containing your cloud-config file to the machine.
## Configuration Parameters
### coreos
#### etcd
The `coreos.etcd.*` parameters will be translated to a partial systemd unit acting as an etcd configuration file.
If the platform environment supports the templating feature of coreos-cloudinit it is possible to automate etcd configuration with the `$private_ipv4` and `$public_ipv4` fields. For example, the following cloud-config document...
```yaml
#cloud-config
coreos:
etcd:
name: node001
discovery: https://discovery.etcd.io/3445fa65423d8b04df07f59fb40218f8
addr: $public_ipv4:4001
peer-addr: $private_ipv4:7001
etcd:
name: node001
# generate a new token for each unique cluster from https://discovery.etcd.io/new
discovery: https://discovery.etcd.io/<token>
# multi-region and multi-cloud deployments need to use $public_ipv4
addr: $public_ipv4:4001
peer-addr: $private_ipv4:7001
```
...will generate a systemd unit drop-in like this:
```
```yaml
[Service]
Environment="ETCD_NAME=node001"
Environment="ETCD_DISCOVERY=https://discovery.etcd.io/3445fa65423d8b04df07f59fb40218f8"
Environment="ETCD_DISCOVERY=https://discovery.etcd.io/<token>"
Environment="ETCD_ADDR=203.0.113.29:4001"
Environment="ETCD_PEER_ADDR=192.0.2.13:7001"
```
For more information about the available configuration options, see the [etcd documentation][etcd-config].
Note that hyphens in the coreos.etcd.* keys are mapped to underscores.
For more information about the available configuration parameters, see the [etcd documentation][etcd-config].
_Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are only supported on Amazon EC2, Google Compute Engine, OpenStack, Rackspace, DigitalOcean, and Vagrant._
[etcd-config]: https://github.com/coreos/etcd/blob/master/Documentation/configuration.md
### coreos.oem
#### fleet
These fields are borrowed from the [os-release spec][os-release] and repurposed
as a way for coreos-cloudinit to know about the OEM partition on this machine:
The `coreos.fleet.*` parameters work very similarly to `coreos.etcd.*`, and allow for the configuration of fleet through environment variables. For example, the following cloud-config document...
- **id**: Lowercase string identifying the OEM
- **name**: Human-friendly string representing the OEM
- **version-id**: Lowercase string identifying the version of the OEM
- **home-url**: Link to the homepage of the provider or OEM
- **bug-report-url**: Link to a place to file bug reports about this OEM
```yaml
#cloud-config
coreos-cloudinit renders these fields to `/etc/oem-release`.
If no **id** field is provided, coreos-cloudinit will ignore this section.
coreos:
fleet:
public-ip: $public_ipv4
metadata: region=us-west
```
For example, the following cloud-config document...
...will generate a systemd unit drop-in like this:
```yaml
[Service]
Environment="FLEET_PUBLIC_IP=203.0.113.29"
Environment="FLEET_METADATA=region=us-west"
```
For more information on fleet configuration, see the [fleet documentation][fleet-config].
[fleet-config]: https://github.com/coreos/fleet/blob/master/Documentation/deployment-and-configuration.md#configuration
#### flannel
The `coreos.flannel.*` parameters also work very similarly to `coreos.etcd.*` and `coreos.fleet.*`. They can be used to set environment variables for flanneld. For example, the following cloud-config...
```yaml
#cloud-config
coreos:
flannel:
etcd-prefix: /coreos.com/network2
```
...will generate a systemd unit drop-in like this:
```
[Service]
Environment="FLANNELD_ETCD_PREFIX=/coreos.com/network2"
```
For a complete list of flannel configuration parameters, see the [flannel documentation][flannel-readme].
[flannel-readme]: https://github.com/coreos/flannel/blob/master/README.md
#### update
The `coreos.update.*` parameters manipulate settings related to how CoreOS instances are updated.
These fields will be written out to and replace `/etc/coreos/update.conf`. If only one of the parameters is given, only that field will be overwritten.
The `reboot-strategy` parameter also affects the behaviour of [locksmith](https://github.com/coreos/locksmith).
- **reboot-strategy**: One of "reboot", "etcd-lock", "best-effort" or "off" for controlling when reboots are issued after an update is performed.
- _reboot_: Reboot immediately after an update is applied.
- _etcd-lock_: Reboot after first taking a distributed lock in etcd; this guarantees that only one host will reboot concurrently and that the cluster will remain available during the update.
- _best-effort_: If etcd is running, use "etcd-lock"; otherwise, simply "reboot".
- _off_: Disable rebooting after updates are applied (not recommended).
- **server**: The Omaha endpoint URL which will be queried for updates.
- **group**: The channel which should be used for automatic updates (one of "master", "alpha", "beta", "stable"). This value defaults to the version of the image initially downloaded.
*Note: cloudinit will only manipulate the locksmith unit file in the systemd runtime directory (`/run/systemd/system/locksmithd.service`). If any manual modifications are made to an overriding unit configuration file (e.g. `/etc/systemd/system/locksmithd.service`), cloudinit will no longer be able to control the locksmith service unit.*
##### Example
```yaml
#cloud-config
coreos:
oem:
id: rackspace
name: Rackspace Cloud Servers
version-id: 168.0.0
home-url: https://www.rackspace.com/cloud/servers/
bug-report-url: https://github.com/coreos/coreos-overlay
update:
reboot-strategy: etcd-lock
```
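A slightly fuller sketch (the `group` and `server` values below are illustrative placeholders, not shipped defaults; the endpoint URL is hypothetical):
```yaml
#cloud-config
coreos:
  update:
    reboot-strategy: best-effort
    group: beta
    server: https://update.example.com/v1/update/
```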
...would be rendered to the following `/etc/oem-release`:
#### units
```
ID="rackspace"
NAME="Rackspace Cloud Servers"
VERSION_ID="168.0.0"
HOME_URL="https://www.rackspace.com/cloud/servers/"
BUG_REPORT_URL="https://github.com/coreos/coreos-overlay"
```
The `coreos.units.*` parameters define a list of arbitrary systemd units to start after booting. This feature is intended to help you start essential services required to mount storage and configure networking in order to join the CoreOS cluster. It is not intended to be a Chef/Puppet replacement.
[os-release]: http://www.freedesktop.org/software/systemd/man/os-release.html
### coreos.units
Arbitrary systemd units may be provided in the `coreos.units` attribute.
`coreos.units` is a list of objects with the following fields:
Each item is an object with the following fields:
- **name**: String representing unit's name. Required.
- **runtime**: Boolean indicating whether or not to persist the unit across reboots. This is analagous to the `--runtime` argument to `systemd enable`. Default value is false.
- **runtime**: Boolean indicating whether or not to persist the unit across reboots. This is analogous to the `--runtime` argument to `systemctl enable`. The default value is false.
- **enable**: Boolean indicating whether or not to handle the [Install] section of the unit file. This is similar to running `systemctl enable <name>`. The default value is false.
- **content**: Plaintext string representing entire unit file. If no value is provided, the unit is assumed to exist already.
- **command**: Command to execute on unit: start, stop, reload, restart, try-restart, reload-or-restart, reload-or-try-restart. Default value is restart.
- **command**: Command to execute on unit: start, stop, reload, restart, try-restart, reload-or-restart, reload-or-try-restart. The default behavior is to not execute any commands.
- **mask**: Whether to mask the unit file by symlinking it to `/dev/null` (analogous to `systemctl mask <name>`). Note that unlike `systemctl mask`, **this will destructively remove any existing unit file** located at `/etc/systemd/system/<unit>`, to ensure that the mask succeeds. The default value is false.
- **drop-ins**: A list of unit drop-ins with the following fields:
- **name**: String representing unit's name. Required.
- **content**: Plaintext string representing entire file. Required.
**NOTE:** The command field is ignored for all network, netdev, and link units. The systemd-networkd.service unit will be restarted in their place.
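To make that note concrete, here is a sketch of supplying a `.network` unit through `coreos.units`; no `command` is given since it would be ignored, and systemd-networkd is restarted instead (the interface name and addresses are placeholders):
```yaml
#cloud-config
coreos:
  units:
    - name: 10-static.network
      content: |
        [Match]
        Name=eth0

        [Network]
        Address=192.0.2.10/24
        Gateway=192.0.2.1
```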
@@ -94,50 +168,61 @@ Arbitrary systemd units may be provided in the `coreos.units` attribute.
Write a unit to disk, automatically starting it.
```
```yaml
#cloud-config
coreos:
units:
- name: docker-redis.service
content: |
[Unit]
Description=Redis container
Author=Me
After=docker.service
units:
- name: docker-redis.service
command: start
content: |
[Unit]
Description=Redis container
Author=Me
After=docker.service
[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server
[Install]
WantedBy=local.target
[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server
```
Start the builtin `etcd` and `fleet` services:
Add the DOCKER_OPTS environment variable to docker.service.
```
# cloud-config
```yaml
#cloud-config
coreos:
units:
- name: etcd.service
command: start
- name: fleet.service
command: start
units:
- name: docker.service
drop-ins:
- name: 50-insecure-registry.conf
content: |
[Service]
Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
```
## Cloud-Config Parameters
Start the built-in `etcd` and `fleet` services:
```yaml
#cloud-config
coreos:
units:
- name: etcd.service
command: start
- name: fleet.service
command: start
```
### ssh_authorized_keys
Provided public SSH keys will be authorized for the `core` user.
The `ssh_authorized_keys` parameter adds public SSH keys which will be authorized for the `core` user.
The keys will be named "coreos-cloudinit" by default.
Override this with the `--ssh-key-name` flag when calling `coreos-cloudinit`.
Override this by using the `--ssh-key-name` flag when calling `coreos-cloudinit`.
```
```yaml
#cloud-config
ssh_authorized_keys:
@@ -146,10 +231,10 @@ ssh_authorized_keys:
### hostname
The provided value will be used to set the system's hostname.
The `hostname` parameter defines the system's hostname.
This is the local part of a fully-qualified domain name (i.e. `foo` in `foo.example.com`).
```
```yaml
#cloud-config
hostname: coreos1
@@ -157,20 +242,20 @@ hostname: coreos1
### users
Add or modify users with the `users` directive by providing a list of user objects, each consisting of the following fields.
Each field is optional and of type string unless otherwise noted.
The `users` parameter adds or modifies the specified list of users. Each user is an object which consists of the following fields. Each field is optional and of type string unless otherwise noted.
All but the `passwd` and `ssh-authorized-keys` fields will be ignored if the user already exists.
- **name**: Required. Login name of user
- **gecos**: GECOS comment of user
- **passwd**: Hash of the password to use for this user
- **homedir**: User's home directory. Defaults to /home/<name>
- **homedir**: User's home directory. Defaults to /home/\<name\>
- **no-create-home**: Boolean. Skip home directory creation.
- **primary-group**: Default group for the user. Defaults to a new group created named after the user.
- **groups**: Add user to these additional groups
- **no-user-group**: Boolean. Skip default group creation.
- **ssh-authorized-keys**: List of public SSH keys to authorize for this user
- **coreos-ssh-import-github**: Authorize SSH keys from Github user
- **coreos-ssh-import-url**: Authorize SSH keys imported from a url endpoint.
- **system**: Create the user as a system user. No home directory will be created.
- **no-log-init**: Boolean. Skip initialization of lastlog and faillog databases.
@@ -182,17 +267,17 @@ The following fields are not yet implemented:
- **selinux-user**: Corresponding SELinux user
- **ssh-import-id**: Import SSH keys by ID from Launchpad.
```
```yaml
#cloud-config
users:
- name: elroy
passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm...
groups:
- staff
- docker
ssh-authorized-keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm...
groups:
- sudo
- docker
ssh-authorized-keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```
#### Generating a password hash
@@ -215,12 +300,80 @@ perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'
Using a higher number of rounds will help create more secure passwords, but given enough time, password hashes can be reversed. On most RPM-based distributions there is a tool called mkpasswd, available in the `expect` package, but it does not handle "rounds" or advanced hashing algorithms.
#### Retrieving SSH Authorized Keys
##### From a GitHub User
Using the `coreos-ssh-import-github` field, we can import public SSH keys from a GitHub user to use as authorized keys to a server.
```yaml
#cloud-config
users:
- name: elroy
coreos-ssh-import-github: elroy
```
##### From an HTTP Endpoint
We can also pull public SSH keys from any HTTP endpoint which matches [GitHub's API response format](https://developer.github.com/v3/users/keys/#list-public-keys-for-a-user).
For example, if you have an installation of GitHub Enterprise, you can provide a complete URL with an authentication token:
```yaml
#cloud-config
users:
- name: elroy
coreos-ssh-import-url: https://github-enterprise.example.com/api/v3/users/elroy/keys?access_token=<TOKEN>
```
You can also specify any URL whose response matches the JSON format for public keys:
```yaml
#cloud-config
users:
- name: elroy
coreos-ssh-import-url: https://example.com/public-keys
```
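For reference, the endpoint is expected to answer with JSON in the same shape as GitHub's public-keys listing; a sketch of such a response (with a placeholder key) would be:
```json
[
  {
    "id": 1,
    "key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
  }
]
```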
### write_files
Inject an arbitrary set of files to the local filesystem.
Provide a list of objects with the following attributes:
The `write_files` directive defines a set of files to create on the local filesystem.
Each item in the list may have the following keys:
- **path**: Absolute location on disk where contents should be written
- **content**: Data to write at the provided `path`
- **permissions**: String representing file permissions in octal notation (i.e. '0644')
- **permissions**: Integer representing file permissions, typically in octal notation (i.e. 0644)
- **owner**: User and group that should own the file written to disk. This is equivalent to the `<user>:<group>` argument to `chown <user>:<group> <path>`.
Explicitly not implemented is the **encoding** attribute.
The **content** field must represent exactly what should be written to disk.
```yaml
#cloud-config
write_files:
- path: /etc/resolv.conf
permissions: 0644
owner: root
content: |
nameserver 8.8.8.8
- path: /etc/motd
permissions: 0644
owner: root
content: |
Good news, everyone!
```
### manage_etc_hosts
The `manage_etc_hosts` parameter configures the contents of the `/etc/hosts` file, which is used for local name resolution.
Currently, the only supported value is "localhost", which will cause your system's hostname
to resolve to "127.0.0.1". This is helpful when the host does not have DNS
infrastructure in place to resolve its own hostname, for example, when using Vagrant.
```yaml
#cloud-config
manage_etc_hosts: localhost
```


@@ -0,0 +1,34 @@
# Distribution via Config Drive
CoreOS supports providing configuration data via [config drive][config-drive]
disk images. Currently, only a single script or cloud-config file is supported.
[config-drive]: http://docs.openstack.org/user-guide/content/enable_config_drive.html#config_drive_contents
## Contents and Format
The image should be a single FAT or ISO9660 file system with the label
`config-2` and the configuration data should be located at
`openstack/latest/user_data`.
For example, to wrap up a config named `user_data` in a config drive image:
```sh
mkdir -p /tmp/new-drive/openstack/latest
cp user_data /tmp/new-drive/openstack/latest/user_data
mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive
rm -r /tmp/new-drive
```
## QEMU virtfs
One exception to the above: when using QEMU it is possible to skip creating an
image and use a plain directory containing the same contents:
```sh
qemu-system-x86_64 \
-fsdev local,id=conf,security_model=none,readonly,path=/tmp/new-drive \
-device virtio-9p-pci,fsdev=conf,mount_tag=config-2 \
[usual qemu options here...]
```


@@ -0,0 +1,27 @@
# Debian Interfaces
**WARNING**: This option is EXPERIMENTAL and may change or be removed at any
point.
There is basic support for converting from a Debian network configuration to
networkd unit files. The -convert-netconf=debian option is used to activate
this feature.
# convert-netconf
Default: ""
Read the network config provided on the config-drive and translate it from the
specified format into networkd unit files (requires the -from-configdrive
flag). Currently this only supports "debian", which covers a small subset of the
[Debian network configuration](https://wiki.debian.org/NetworkConfiguration).
The supported options include (see the example after this list):
- interface config methods
- static
- address/netmask
- gateway
- hwaddress
- dns-nameservers
- dhcp
- hwaddress
- manual
- loopback
- vlan_raw_device
- bond-slaves
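As an illustration only (not an exhaustive statement of what the converter accepts), a Debian-style stanza using the options above might look like:
```
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 203.0.113.53
```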

34
Godeps/Godeps.json generated Normal file

@@ -0,0 +1,34 @@
{
"ImportPath": "github.com/coreos/coreos-cloudinit",
"GoVersion": "go1.3.1",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "github.com/cloudsigma/cepgo",
"Rev": "1bfc4895bf5c4d3b599f3f6ee142299488c8739b"
},
{
"ImportPath": "github.com/coreos/go-systemd/dbus",
"Rev": "4fbc5060a317b142e6c7bfbedb65596d5f0ab99b"
},
{
"ImportPath": "github.com/dotcloud/docker/pkg/netlink",
"Comment": "v0.11.1-359-g55d41c3e21e1",
"Rev": "55d41c3e21e1593b944c06196ffb2ac57ab7f653"
},
{
"ImportPath": "github.com/guelfey/go.dbus",
"Rev": "f6a3a2366cc39b8479cadc499d3c735fb10fbdda"
},
{
"ImportPath": "github.com/tarm/goserial",
"Rev": "cdabc8d44e8e84f58f18074ae44337e1f2f375b9"
},
{
"ImportPath": "gopkg.in/yaml.v1",
"Rev": "feb4ca79644e8e7e39c06095246ee54b1282c118"
}
]
}

5
Godeps/Readme generated Normal file

@@ -0,0 +1,5 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

2
Godeps/_workspace/.gitignore generated vendored Normal file

@@ -0,0 +1,2 @@
/pkg
/bin


@@ -0,0 +1,23 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,43 @@
cepgo
=====
Cepko implements easy-to-use communication with CloudSigma's VMs through a
virtual serial port without bothering with formatting the messages properly nor
parsing the output with the specific and sometimes confusing shell tools for
that purpose.
Having the server definition accessible by the VM can be useful in various
ways. For example it is possible to easily determine from within the VM, which
network interfaces are connected to public and which to private network.
Another use is to pass some data to initial VM setup scripts, like setting the
hostname to the VM name or passing ssh public keys through server meta.
Example usage:
package main
import (
"fmt"
"github.com/cloudsigma/cepgo"
)
func main() {
c := cepgo.NewCepgo()
result, err := c.Meta()
if err != nil {
panic(err)
}
fmt.Printf("%#v", result)
}
Output:
map[string]interface {}{
"optimize_for":"custom",
"ssh_public_key":"ssh-rsa AAA...",
"description":"[...]",
}
For more information take a look at the Server Context section of CloudSigma
API Docs: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html


@@ -0,0 +1,186 @@
// Cepko implements easy-to-use communication with CloudSigma's VMs through a
// virtual serial port without bothering with formatting the messages properly
// nor parsing the output with the specific and sometimes confusing shell tools
// for that purpose.
//
// Having the server definition accessible by the VM can be useful in various
// ways. For example it is possible to easily determine from within the VM,
// which network interfaces are connected to public and which to private
// network. Another use is to pass some data to initial VM setup scripts, like
// setting the hostname to the VM name or passing ssh public keys through
// server meta.
//
// Example usage:
//
// package main
//
// import (
// "fmt"
//
// "github.com/cloudsigma/cepgo"
// )
//
// func main() {
// c := cepgo.NewCepgo()
// result, err := c.Meta()
// if err != nil {
// panic(err)
// }
// fmt.Printf("%#v", result)
// }
//
// Output:
//
// map[string]string{
// "optimize_for":"custom",
// "ssh_public_key":"ssh-rsa AAA...",
// "description":"[...]",
// }
//
// For more information take a look at the Server Context section API Docs:
// http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
package cepgo
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"runtime"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/tarm/goserial"
)
const (
requestPattern = "<\n%s\n>"
EOT = '\x04' // End Of Transmission
)
var (
SerialPort string = "/dev/ttyS1"
Baud int = 115200
)
// Sets the serial port. If the operating system is Windows, CloudSigma's server
// context is at the COM2 port; otherwise (linux, freebsd, darwin) the port is
// left at the default /dev/ttyS1.
func init() {
if runtime.GOOS == "windows" {
SerialPort = "COM2"
}
}
// The default fetcher makes the connection to the serial port,
// writes given query and reads until the EOT symbol.
func fetchViaSerialPort(key string) ([]byte, error) {
config := &serial.Config{Name: SerialPort, Baud: Baud}
connection, err := serial.OpenPort(config)
if err != nil {
return nil, err
}
query := fmt.Sprintf(requestPattern, key)
if _, err := connection.Write([]byte(query)); err != nil {
return nil, err
}
reader := bufio.NewReader(connection)
answer, err := reader.ReadBytes(EOT)
if err != nil {
return nil, err
}
return answer[0 : len(answer)-1], nil
}
// Queries to the serial port can be executed only from an instance of this type.
// The result from each of them can be either an interface{}, a map[string]string,
// or a single value when a single value is returned. There is also a public method
// that directly calls the fetcher and returns the raw []byte from the serial port.
type Cepgo struct {
fetcher func(string) ([]byte, error)
}
// Creates a Cepgo instance with the default serial port fetcher.
func NewCepgo() *Cepgo {
cepgo := new(Cepgo)
cepgo.fetcher = fetchViaSerialPort
return cepgo
}
// Creates a Cepgo instance with custom fetcher.
func NewCepgoFetcher(fetcher func(string) ([]byte, error)) *Cepgo {
cepgo := new(Cepgo)
cepgo.fetcher = fetcher
return cepgo
}
// Fetches raw []byte from the serial port using directly the fetcher member.
func (c *Cepgo) FetchRaw(key string) ([]byte, error) {
return c.fetcher(key)
}
// Fetches a single key, tries to unmarshal the result from JSON, and returns
// it. If the unmarshalling fails, it is safe to assume the result is just a
// string, so that string is returned.
func (c *Cepgo) Key(key string) (interface{}, error) {
var result interface{}
fetched, err := c.FetchRaw(key)
if err != nil {
return nil, err
}
err = json.Unmarshal(fetched, &result)
if err != nil {
return string(fetched), nil
}
return result, nil
}
// Fetches all the server context. Equivalent of c.Key("")
func (c *Cepgo) All() (interface{}, error) {
return c.Key("")
}
// Fetches only the object meta field and makes sure to return a proper
// map[string]string
func (c *Cepgo) Meta() (map[string]string, error) {
rawMeta, err := c.Key("/meta/")
if err != nil {
return nil, err
}
return typeAssertToMapOfStrings(rawMeta)
}
// Fetches only the global context and makes sure to return a proper
// map[string]string
func (c *Cepgo) GlobalContext() (map[string]string, error) {
rawContext, err := c.Key("/global_context/")
if err != nil {
return nil, err
}
return typeAssertToMapOfStrings(rawContext)
}
// Just a little helper function that uses type assertions in order to convert
// an interface{} to a map[string]string if this is possible.
func typeAssertToMapOfStrings(raw interface{}) (map[string]string, error) {
result := make(map[string]string)
dictionary, ok := raw.(map[string]interface{})
if !ok {
return nil, errors.New("Received bytes are formatted badly")
}
for key, rawValue := range dictionary {
if value, ok := rawValue.(string); ok {
result[key] = value
} else {
return nil, errors.New("Server context metadata is formatted badly")
}
}
return result, nil
}


@@ -0,0 +1,122 @@
package cepgo
import (
"encoding/json"
"testing"
)
func fetchMock(key string) ([]byte, error) {
context := []byte(`{
"context": true,
"cpu": 4000,
"cpu_model": null,
"cpus_instead_of_cores": false,
"enable_numa": false,
"global_context": {
"some_global_key": "some_global_val"
},
"grantees": [],
"hv_relaxed": false,
"hv_tsc": false,
"jobs": [],
"mem": 4294967296,
"meta": {
"base64_fields": "cloudinit-user-data",
"cloudinit-user-data": "I2Nsb3VkLWNvbmZpZwoKaG9zdG5hbWU6IGNvcmVvczE=",
"ssh_public_key": "ssh-rsa AAAAB2NzaC1yc2E.../hQ5D5 john@doe"
},
"name": "coreos",
"nics": [
{
"runtime": {
"interface_type": "public",
"ip_v4": {
"uuid": "31.171.251.74"
},
"ip_v6": null
},
"vlan": null
}
],
"smp": 2,
"status": "running",
"uuid": "20a0059b-041e-4d0c-bcc6-9b2852de48b3"
}`)
if key == "" {
return context, nil
}
var marshalledContext map[string]interface{}
err := json.Unmarshal(context, &marshalledContext)
if err != nil {
return nil, err
}
if key[0] == '/' {
key = key[1:]
}
if key[len(key)-1] == '/' {
key = key[:len(key)-1]
}
return json.Marshal(marshalledContext[key])
}
func TestAll(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
result, err := cepgo.All()
if err != nil {
t.Error(err)
}
for _, key := range []string{"meta", "name", "uuid", "global_context"} {
if _, ok := result.(map[string]interface{})[key]; !ok {
t.Errorf("%s not in all keys", key)
}
}
}
func TestKey(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
result, err := cepgo.Key("uuid")
if err != nil {
t.Error(err)
}
if _, ok := result.(string); !ok {
t.Errorf("%#v\n", result)
t.Error("Fetching the uuid did not return a string")
}
}
func TestMeta(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
meta, err := cepgo.Meta()
if err != nil {
t.Errorf("%#v\n", meta)
t.Error(err)
}
if _, ok := meta["ssh_public_key"]; !ok {
t.Error("ssh_public_key is not in the meta")
}
}
func TestGlobalContext(t *testing.T) {
cepgo := NewCepgoFetcher(fetchMock)
result, err := cepgo.GlobalContext()
if err != nil {
t.Error(err)
}
if _, ok := result["some_global_key"]; !ok {
t.Error("some_global_key is not in the global context")
}
}

View File

@@ -18,10 +18,12 @@ limitations under the License.
package dbus
import (
"os"
"strconv"
"strings"
"sync"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
)
const signalBuffer = 100
@@ -73,7 +75,12 @@ func (c *Conn) initConnection() error {
return err
}
err = c.sysconn.Auth(nil)
// Only use EXTERNAL method, and hardcode the uid (not username)
// to avoid a username lookup (which requires a dynamically linked
// libc)
methods := []dbus.Auth{dbus.AuthExternal(strconv.Itoa(os.Getuid()))}
err = c.sysconn.Auth(methods)
if err != nil {
c.sysconn.Close()
return err

View File

@@ -18,7 +18,7 @@ package dbus
import (
"errors"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
)
func (c *Conn) initJobs() {
@@ -35,6 +35,7 @@ func (c *Conn) jobComplete(signal *dbus.Signal) {
out, ok := c.jobListener.jobs[job]
if ok {
out <- result
delete(c.jobListener.jobs, job)
}
c.jobListener.Unlock()
}
@@ -137,8 +138,8 @@ func (c *Conn) KillUnit(name string, signal int32) {
c.sysobj.Call("org.freedesktop.systemd1.Manager.KillUnit", 0, name, "all", signal).Store()
}
// GetUnitProperties takes the unit name and returns all of its dbus object properties.
func (c *Conn) GetUnitProperties(unit string) (map[string]interface{}, error) {
// getProperties takes the unit name and returns all of its dbus object properties, for the given dbus interface
func (c *Conn) getProperties(unit string, dbusInterface string) (map[string]interface{}, error) {
var err error
var props map[string]dbus.Variant
@@ -148,7 +149,7 @@ func (c *Conn) GetUnitProperties(unit string) (map[string]interface{}, error) {
}
obj := c.sysconn.Object("org.freedesktop.systemd1", path)
err = obj.Call("org.freedesktop.DBus.Properties.GetAll", 0, "org.freedesktop.systemd1.Unit").Store(&props)
err = obj.Call("org.freedesktop.DBus.Properties.GetAll", 0, dbusInterface).Store(&props)
if err != nil {
return nil, err
}
@@ -161,6 +162,55 @@ func (c *Conn) GetUnitProperties(unit string) (map[string]interface{}, error) {
return out, nil
}
// GetUnitProperties takes the unit name and returns all of its dbus object properties.
func (c *Conn) GetUnitProperties(unit string) (map[string]interface{}, error) {
return c.getProperties(unit, "org.freedesktop.systemd1.Unit")
}
func (c *Conn) getProperty(unit string, dbusInterface string, propertyName string) (*Property, error) {
var err error
var prop dbus.Variant
path := ObjectPath("/org/freedesktop/systemd1/unit/" + unit)
if !path.IsValid() {
return nil, errors.New("invalid unit name: " + unit)
}
obj := c.sysconn.Object("org.freedesktop.systemd1", path)
err = obj.Call("org.freedesktop.DBus.Properties.Get", 0, dbusInterface, propertyName).Store(&prop)
if err != nil {
return nil, err
}
return &Property{Name: propertyName, Value: prop}, nil
}
func (c *Conn) GetUnitProperty(unit string, propertyName string) (*Property, error) {
return c.getProperty(unit, "org.freedesktop.systemd1.Unit", propertyName)
}
// GetUnitTypeProperties returns the extra properties for a unit, specific to the unit type.
// Valid values for unitType: Service, Socket, Target, Device, Mount, Automount, Snapshot, Timer, Swap, Path, Slice, Scope
// return "dbus.Error: Unknown interface" if the unitType is not the correct type of the unit
func (c *Conn) GetUnitTypeProperties(unit string, unitType string) (map[string]interface{}, error) {
return c.getProperties(unit, "org.freedesktop.systemd1."+unitType)
}
// SetUnitProperties() may be used to modify certain unit properties at runtime.
// Not all properties may be changed at runtime, but many resource management
// settings (primarily those in systemd.cgroup(5)) may. The changes are applied
// instantly, and stored on disk for future boots, unless runtime is true, in which
// case the settings only apply until the next reboot. name is the name of the unit
// to modify. properties are the settings to set, encoded as an array of property
// name and value pairs.
func (c *Conn) SetUnitProperties(name string, runtime bool, properties ...Property) error {
return c.sysobj.Call("SetUnitProperties", 0, name, runtime, properties).Store()
}
func (c *Conn) GetUnitTypeProperty(unit string, unitType string, propertyName string) (*Property, error) {
return c.getProperty(unit, "org.freedesktop.systemd1."+unitType, propertyName)
}
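
For illustration only, the new property helpers could be used along these lines; New() is assumed to be this package's existing constructor, and the unit name and CPUShares value are arbitrary examples rather than part of the change:

```go
// exampleUnitProperties is an illustrative sketch, not part of this diff.
// "dbus" refers to the vendored go.dbus import used elsewhere in this file,
// and fmt would also need to be imported.
func exampleUnitProperties() error {
	conn, err := New() // assumed to be this package's existing constructor
	if err != nil {
		return err
	}

	// A single property from the generic Unit interface.
	prop, err := conn.GetUnitProperty("tmp.mount", "Wants")
	if err != nil {
		return err
	}
	fmt.Println(prop.Name, prop.Value)

	// Type-specific properties, addressed via the Mount interface.
	info, err := conn.GetUnitTypeProperties("tmp.mount", "Mount")
	if err != nil {
		return err
	}
	fmt.Println(info["Where"])

	// Runtime-only cgroup tweak (runtime=true), as in TestSetUnitProperties below.
	return conn.SetUnitProperties("tmp.mount", true,
		Property{"CPUShares", dbus.MakeVariant(uint64(512))})
}
```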
// ListUnits returns an array with all currently loaded units. Note that
// units may be known by multiple names at the same time, and hence there might
// be more unit names loaded than actual units behind them.
@@ -253,8 +303,52 @@ type EnableUnitFileChange struct {
Destination string // Destination of the symlink
}
// DisableUnitFiles() may be used to disable one or more units in the system (by
// removing symlinks to them from /etc or /run).
//
// It takes a list of unit files to disable (either just file names or full
// absolute paths if the unit files are residing outside the usual unit
// search paths), and one boolean: whether the unit was enabled for runtime
// only (true, /run), or persistently (false, /etc).
//
// This call returns an array with the changes made. The changes list
// consists of structures with three strings: the type of the change (one of
// symlink or unlink), the file name of the symlink and the destination of the
// symlink.
func (c *Conn) DisableUnitFiles(files []string, runtime bool) ([]DisableUnitFileChange, error) {
result := make([][]interface{}, 0)
err := c.sysobj.Call("DisableUnitFiles", 0, files, runtime).Store(&result)
if err != nil {
return nil, err
}
resultInterface := make([]interface{}, len(result))
for i := range result {
resultInterface[i] = result[i]
}
changes := make([]DisableUnitFileChange, len(result))
changesInterface := make([]interface{}, len(changes))
for i := range changes {
changesInterface[i] = &changes[i]
}
err = dbus.Store(resultInterface, changesInterface...)
if err != nil {
return nil, err
}
return changes, nil
}
type DisableUnitFileChange struct {
Type string // Type of the change (one of symlink or unlink)
Filename string // File name of the symlink
Destination string // Destination of the symlink
}
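
As a small, hypothetical illustration of the call described above (conn is an existing *Conn, the unit name is made up, and the enable step is omitted):

```go
// Disable a unit that was previously enabled for runtime only (/run).
changes, err := conn.DisableUnitFiles([]string{"example.service"}, true)
if err != nil {
	return err
}
for _, ch := range changes {
	// For a disable this is typically an "unlink" of the runtime symlink.
	fmt.Println(ch.Type, ch.Filename)
}
```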
// Reload instructs systemd to scan for and reload unit files. This is
// equivalent to a 'systemctl daemon-reload'.
func (c *Conn) Reload() (string, error) {
return c.runJob("org.freedesktop.systemd1.Manager.Reload")
func (c *Conn) Reload() error {
return c.sysobj.Call("org.freedesktop.systemd1.Manager.Reload", 0).Store()
}

View File

@@ -18,9 +18,11 @@ package dbus
import (
"fmt"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
"math/rand"
"os"
"path/filepath"
"reflect"
"testing"
)
@@ -50,13 +52,16 @@ func setupUnit(target string, conn *Conn, t *testing.T) {
fixture := []string{abs}
install, changes, err := conn.EnableUnitFiles(fixture, true, true)
if err != nil {
t.Fatal(err)
}
if install != false {
t.Fatal("Install was true")
}
if len(changes) < 1 {
t.Fatal("Expected one change, got %v", changes)
t.Fatalf("Expected one change, got %v", changes)
}
if changes[0].Filename != targetRun {
@@ -118,6 +123,37 @@ func TestStartStopUnit(t *testing.T) {
}
}
// Enables a unit and then immediately tears it down
func TestEnableDisableUnit(t *testing.T) {
target := "enable-disable.service"
conn := setupConn(t)
setupUnit(target, conn, t)
abs, err := filepath.Abs("../fixtures/" + target)
if err != nil {
t.Fatal(err)
}
path := filepath.Join("/run/systemd/system/", target)
// 2. Disable the unit
changes, err := conn.DisableUnitFiles([]string{abs}, true)
if err != nil {
t.Fatal(err)
}
if len(changes) != 1 {
t.Fatalf("Changes should include the path, %v", changes)
}
if changes[0].Filename != path {
t.Fatalf("Change should include correct filename, %+v", changes[0])
}
if changes[0].Destination != "" {
t.Fatalf("Change destination should be empty, %+v", changes[0])
}
}
// TestGetUnitProperties reads the `-.mount` which should exist on all systemd
// systems and ensures that one of its properties is valid.
func TestGetUnitProperties(t *testing.T) {
@@ -139,6 +175,20 @@ func TestGetUnitProperties(t *testing.T) {
if names[0] != "system.slice" {
t.Fatal("unexpected wants for /")
}
prop, err := conn.GetUnitProperty(unit, "Wants")
if err != nil {
t.Fatal(err)
}
if prop.Name != "Wants" {
t.Fatal("unexpected property name")
}
val := prop.Value.Value().([]string)
if !reflect.DeepEqual(val, names) {
t.Fatal("unexpected property value")
}
}
// TestGetUnitPropertiesRejectsInvalidName attempts to get the properties for a
@@ -150,10 +200,37 @@ func TestGetUnitPropertiesRejectsInvalidName(t *testing.T) {
unit := "//invalid#$^/"
_, err := conn.GetUnitProperties(unit)
if err == nil {
t.Fatal("Expected an error, got nil")
}
_, err = conn.GetUnitProperty(unit, "Wants")
if err == nil {
t.Fatal("Expected an error, got nil")
}
}
// TestSetUnitProperties changes a cgroup setting on the `tmp.mount`
// which should exist on all systemd systems and ensures that the
// property was set.
func TestSetUnitProperties(t *testing.T) {
conn := setupConn(t)
unit := "tmp.mount"
if err := conn.SetUnitProperties(unit, true, Property{"CPUShares", dbus.MakeVariant(uint64(1023))}); err != nil {
t.Fatal(err)
}
info, err := conn.GetUnitTypeProperties(unit, "Mount")
if err != nil {
t.Fatal(err)
}
value := info["CPUShares"].(uint64)
if value != 1023 {
t.Fatal("CPUShares of unit is not 1023, %s", value)
}
}
// Ensure that basic transient unit starting and stopping works.
@@ -211,3 +288,27 @@ func TestStartStopTransientUnit(t *testing.T) {
t.Fatalf("Test unit found in list, should be stopped")
}
}
func TestConnJobListener(t *testing.T) {
target := "start-stop.service"
conn := setupConn(t)
setupUnit(target, conn, t)
jobSize := len(conn.jobListener.jobs)
_, err := conn.StartUnit(target, "replace")
if err != nil {
t.Fatal(err)
}
_, err = conn.StopUnit(target, "replace")
if err != nil {
t.Fatal(err)
}
currentJobSize := len(conn.jobListener.jobs)
if jobSize != currentJobSize {
t.Fatal("JobListener jobs leaked")
}
}

View File

@@ -17,7 +17,7 @@ limitations under the License.
package dbus
import (
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
)
// From the systemd docs:
@@ -209,3 +209,12 @@ func PropPropagatesReloadTo(units ...string) Property {
func PropRequiresMountsFor(units ...string) Property {
return propDependency("RequiresMountsFor", units)
}
// PropSlice sets the Slice unit property. See
// http://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#Slice=
func PropSlice(slice string) Property {
return Property{
Name: "Slice",
Value: dbus.MakeVariant(slice),
}
}

View File

@@ -20,7 +20,7 @@ import (
"errors"
"time"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
)
const (
@@ -101,7 +101,7 @@ func (c *Conn) SubscribeUnits(interval time.Duration) (<-chan map[string]*UnitSt
// SubscribeUnitsCustom is like SubscribeUnits but lets you specify the buffer
// size of the channels, the comparison function for detecting changes and a filter
// function for cutting down on the noise that your channel receives.
func (c *Conn) SubscribeUnitsCustom(interval time.Duration, buffer int, isChanged func(*UnitStatus, *UnitStatus) bool, filterUnit func (string) bool) (<-chan map[string]*UnitStatus, <-chan error) {
func (c *Conn) SubscribeUnitsCustom(interval time.Duration, buffer int, isChanged func(*UnitStatus, *UnitStatus) bool, filterUnit func(string) bool) (<-chan map[string]*UnitStatus, <-chan error) {
old := make(map[string]*UnitStatus)
statusChan := make(chan map[string]*UnitStatus, buffer)
errChan := make(chan error, buffer)

View File

@@ -0,0 +1,2 @@
Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
Guillaume J. Charmes <guillaume@docker.com> (@creack)

View File

@@ -0,0 +1,23 @@
// Package netlink provides access to low-level Netlink sockets and messages.
//
// Actual implementations are in:
// netlink_linux.go
// netlink_darwin.go
package netlink
import (
"errors"
"net"
)
var (
ErrWrongSockType = errors.New("Wrong socket type")
ErrShortResponse = errors.New("Got short response from netlink")
)
// A Route is a subnet associated with the interface to reach it.
type Route struct {
*net.IPNet
Iface *net.Interface
Default bool
}
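
A minimal usage sketch, assuming this package is imported from its vendored location (the exact path is an assumption) and that the program runs on linux/amd64, where the real implementation lives:

```go
package main

import (
	"fmt"
	"log"

	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/docker/libcontainer/netlink" // assumed vendored path
)

func main() {
	routes, err := netlink.NetworkGetRoutes()
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range routes {
		name := "?"
		if r.Iface != nil {
			name = r.Iface.Name
		}
		if r.Default {
			fmt.Println("default route via", name)
			continue
		}
		fmt.Println(r.IPNet.String(), "dev", name)
	}
}
```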

View File

@@ -0,0 +1,891 @@
// +build amd64
package netlink
import (
"encoding/binary"
"fmt"
"math/rand"
"net"
"syscall"
"unsafe"
)
const (
IFNAMSIZ = 16
DEFAULT_CHANGE = 0xFFFFFFFF
IFLA_INFO_KIND = 1
IFLA_INFO_DATA = 2
VETH_INFO_PEER = 1
IFLA_NET_NS_FD = 28
SIOC_BRADDBR = 0x89a0
SIOC_BRADDIF = 0x89a2
)
var nextSeqNr int
type ifreqHwaddr struct {
IfrnName [16]byte
IfruHwaddr syscall.RawSockaddr
}
type ifreqIndex struct {
IfrnName [16]byte
IfruIndex int32
}
func nativeEndian() binary.ByteOrder {
var x uint32 = 0x01020304
if *(*byte)(unsafe.Pointer(&x)) == 0x01 {
return binary.BigEndian
}
return binary.LittleEndian
}
func getSeq() int {
nextSeqNr = nextSeqNr + 1
return nextSeqNr
}
func getIpFamily(ip net.IP) int {
if len(ip) <= net.IPv4len {
return syscall.AF_INET
}
if ip.To4() != nil {
return syscall.AF_INET
}
return syscall.AF_INET6
}
type NetlinkRequestData interface {
Len() int
ToWireFormat() []byte
}
type IfInfomsg struct {
syscall.IfInfomsg
}
func newIfInfomsg(family int) *IfInfomsg {
return &IfInfomsg{
IfInfomsg: syscall.IfInfomsg{
Family: uint8(family),
},
}
}
func newIfInfomsgChild(parent *RtAttr, family int) *IfInfomsg {
msg := newIfInfomsg(family)
parent.children = append(parent.children, msg)
return msg
}
func (msg *IfInfomsg) ToWireFormat() []byte {
native := nativeEndian()
length := syscall.SizeofIfInfomsg
b := make([]byte, length)
b[0] = msg.Family
b[1] = 0
native.PutUint16(b[2:4], msg.Type)
native.PutUint32(b[4:8], uint32(msg.Index))
native.PutUint32(b[8:12], msg.Flags)
native.PutUint32(b[12:16], msg.Change)
return b
}
func (msg *IfInfomsg) Len() int {
return syscall.SizeofIfInfomsg
}
type IfAddrmsg struct {
syscall.IfAddrmsg
}
func newIfAddrmsg(family int) *IfAddrmsg {
return &IfAddrmsg{
IfAddrmsg: syscall.IfAddrmsg{
Family: uint8(family),
},
}
}
func (msg *IfAddrmsg) ToWireFormat() []byte {
native := nativeEndian()
length := syscall.SizeofIfAddrmsg
b := make([]byte, length)
b[0] = msg.Family
b[1] = msg.Prefixlen
b[2] = msg.Flags
b[3] = msg.Scope
native.PutUint32(b[4:8], msg.Index)
return b
}
func (msg *IfAddrmsg) Len() int {
return syscall.SizeofIfAddrmsg
}
type RtMsg struct {
syscall.RtMsg
}
func newRtMsg(family int) *RtMsg {
return &RtMsg{
RtMsg: syscall.RtMsg{
Family: uint8(family),
Table: syscall.RT_TABLE_MAIN,
Scope: syscall.RT_SCOPE_UNIVERSE,
Protocol: syscall.RTPROT_BOOT,
Type: syscall.RTN_UNICAST,
},
}
}
func (msg *RtMsg) ToWireFormat() []byte {
native := nativeEndian()
length := syscall.SizeofRtMsg
b := make([]byte, length)
b[0] = msg.Family
b[1] = msg.Dst_len
b[2] = msg.Src_len
b[3] = msg.Tos
b[4] = msg.Table
b[5] = msg.Protocol
b[6] = msg.Scope
b[7] = msg.Type
native.PutUint32(b[8:12], msg.Flags)
return b
}
func (msg *RtMsg) Len() int {
return syscall.SizeofRtMsg
}
func rtaAlignOf(attrlen int) int {
return (attrlen + syscall.RTA_ALIGNTO - 1) & ^(syscall.RTA_ALIGNTO - 1)
}
type RtAttr struct {
syscall.RtAttr
Data []byte
children []NetlinkRequestData
}
func newRtAttr(attrType int, data []byte) *RtAttr {
return &RtAttr{
RtAttr: syscall.RtAttr{
Type: uint16(attrType),
},
children: []NetlinkRequestData{},
Data: data,
}
}
func newRtAttrChild(parent *RtAttr, attrType int, data []byte) *RtAttr {
attr := newRtAttr(attrType, data)
parent.children = append(parent.children, attr)
return attr
}
func (a *RtAttr) Len() int {
l := 0
for _, child := range a.children {
l += child.Len() + syscall.SizeofRtAttr
}
if l == 0 {
l++
}
return rtaAlignOf(l + len(a.Data))
}
func (a *RtAttr) ToWireFormat() []byte {
native := nativeEndian()
length := a.Len()
buf := make([]byte, rtaAlignOf(length+syscall.SizeofRtAttr))
if a.Data != nil {
copy(buf[4:], a.Data)
} else {
next := 4
for _, child := range a.children {
childBuf := child.ToWireFormat()
copy(buf[next:], childBuf)
next += rtaAlignOf(len(childBuf))
}
}
if l := uint16(rtaAlignOf(length)); l != 0 {
native.PutUint16(buf[0:2], l+1)
}
native.PutUint16(buf[2:4], a.Type)
return buf
}
type NetlinkRequest struct {
syscall.NlMsghdr
Data []NetlinkRequestData
}
func (rr *NetlinkRequest) ToWireFormat() []byte {
native := nativeEndian()
length := rr.Len
dataBytes := make([][]byte, len(rr.Data))
for i, data := range rr.Data {
dataBytes[i] = data.ToWireFormat()
length += uint32(len(dataBytes[i]))
}
b := make([]byte, length)
native.PutUint32(b[0:4], length)
native.PutUint16(b[4:6], rr.Type)
native.PutUint16(b[6:8], rr.Flags)
native.PutUint32(b[8:12], rr.Seq)
native.PutUint32(b[12:16], rr.Pid)
next := 16
for _, data := range dataBytes {
copy(b[next:], data)
next += len(data)
}
return b
}
func (rr *NetlinkRequest) AddData(data NetlinkRequestData) {
if data != nil {
rr.Data = append(rr.Data, data)
}
}
func newNetlinkRequest(proto, flags int) *NetlinkRequest {
return &NetlinkRequest{
NlMsghdr: syscall.NlMsghdr{
Len: uint32(syscall.NLMSG_HDRLEN),
Type: uint16(proto),
Flags: syscall.NLM_F_REQUEST | uint16(flags),
Seq: uint32(getSeq()),
},
}
}
type NetlinkSocket struct {
fd int
lsa syscall.SockaddrNetlink
}
func getNetlinkSocket() (*NetlinkSocket, error) {
fd, err := syscall.Socket(syscall.AF_NETLINK, syscall.SOCK_RAW, syscall.NETLINK_ROUTE)
if err != nil {
return nil, err
}
s := &NetlinkSocket{
fd: fd,
}
s.lsa.Family = syscall.AF_NETLINK
if err := syscall.Bind(fd, &s.lsa); err != nil {
syscall.Close(fd)
return nil, err
}
return s, nil
}
func (s *NetlinkSocket) Close() {
syscall.Close(s.fd)
}
func (s *NetlinkSocket) Send(request *NetlinkRequest) error {
if err := syscall.Sendto(s.fd, request.ToWireFormat(), 0, &s.lsa); err != nil {
return err
}
return nil
}
func (s *NetlinkSocket) Receive() ([]syscall.NetlinkMessage, error) {
rb := make([]byte, syscall.Getpagesize())
nr, _, err := syscall.Recvfrom(s.fd, rb, 0)
if err != nil {
return nil, err
}
if nr < syscall.NLMSG_HDRLEN {
return nil, ErrShortResponse
}
rb = rb[:nr]
return syscall.ParseNetlinkMessage(rb)
}
func (s *NetlinkSocket) GetPid() (uint32, error) {
lsa, err := syscall.Getsockname(s.fd)
if err != nil {
return 0, err
}
switch v := lsa.(type) {
case *syscall.SockaddrNetlink:
return v.Pid, nil
}
return 0, ErrWrongSockType
}
func (s *NetlinkSocket) HandleAck(seq uint32) error {
native := nativeEndian()
pid, err := s.GetPid()
if err != nil {
return err
}
done:
for {
msgs, err := s.Receive()
if err != nil {
return err
}
for _, m := range msgs {
if m.Header.Seq != seq {
return fmt.Errorf("Wrong Seq nr %d, expected %d", m.Header.Seq, seq)
}
if m.Header.Pid != pid {
return fmt.Errorf("Wrong pid %d, expected %d", m.Header.Pid, pid)
}
if m.Header.Type == syscall.NLMSG_DONE {
break done
}
if m.Header.Type == syscall.NLMSG_ERROR {
error := int32(native.Uint32(m.Data[0:4]))
if error == 0 {
break done
}
return syscall.Errno(-error)
}
}
}
return nil
}
// Add a new default gateway. Identical to:
// ip route add default via $ip
func AddDefaultGw(ip net.IP) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
family := getIpFamily(ip)
wb := newNetlinkRequest(syscall.RTM_NEWROUTE, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
msg := newRtMsg(family)
wb.AddData(msg)
var ipData []byte
if family == syscall.AF_INET {
ipData = ip.To4()
} else {
ipData = ip.To16()
}
gateway := newRtAttr(syscall.RTA_GATEWAY, ipData)
wb.AddData(gateway)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
// Bring up a particular network interface
func NetworkLinkUp(iface *net.Interface) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
msg.Change = syscall.IFF_UP
msg.Flags = syscall.IFF_UP
msg.Index = int32(iface.Index)
wb.AddData(msg)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
func NetworkLinkDown(iface *net.Interface) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
msg.Change = syscall.IFF_UP
msg.Flags = 0 & ^syscall.IFF_UP
msg.Index = int32(iface.Index)
wb.AddData(msg)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
func NetworkSetMTU(iface *net.Interface, mtu int) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
msg.Type = syscall.RTM_SETLINK
msg.Flags = syscall.NLM_F_REQUEST
msg.Index = int32(iface.Index)
msg.Change = DEFAULT_CHANGE
wb.AddData(msg)
var (
b = make([]byte, 4)
native = nativeEndian()
)
native.PutUint32(b, uint32(mtu))
data := newRtAttr(syscall.IFLA_MTU, b)
wb.AddData(data)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
// same as ip link set $name master $master
func NetworkSetMaster(iface, master *net.Interface) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
msg.Type = syscall.RTM_SETLINK
msg.Flags = syscall.NLM_F_REQUEST
msg.Index = int32(iface.Index)
msg.Change = DEFAULT_CHANGE
wb.AddData(msg)
var (
b = make([]byte, 4)
native = nativeEndian()
)
native.PutUint32(b, uint32(master.Index))
data := newRtAttr(syscall.IFLA_MASTER, b)
wb.AddData(data)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
func NetworkSetNsPid(iface *net.Interface, nspid int) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
msg.Type = syscall.RTM_SETLINK
msg.Flags = syscall.NLM_F_REQUEST
msg.Index = int32(iface.Index)
msg.Change = DEFAULT_CHANGE
wb.AddData(msg)
var (
b = make([]byte, 4)
native = nativeEndian()
)
native.PutUint32(b, uint32(nspid))
data := newRtAttr(syscall.IFLA_NET_NS_PID, b)
wb.AddData(data)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
func NetworkSetNsFd(iface *net.Interface, fd int) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
msg.Type = syscall.RTM_SETLINK
msg.Flags = syscall.NLM_F_REQUEST
msg.Index = int32(iface.Index)
msg.Change = DEFAULT_CHANGE
wb.AddData(msg)
var (
b = make([]byte, 4)
native = nativeEndian()
)
native.PutUint32(b, uint32(fd))
data := newRtAttr(IFLA_NET_NS_FD, b)
wb.AddData(data)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
// Add an IP address to an interface. This is identical to:
// ip addr add $ip/$ipNet dev $iface
func NetworkLinkAddIp(iface *net.Interface, ip net.IP, ipNet *net.IPNet) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
family := getIpFamily(ip)
wb := newNetlinkRequest(syscall.RTM_NEWADDR, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
msg := newIfAddrmsg(family)
msg.Index = uint32(iface.Index)
prefixLen, _ := ipNet.Mask.Size()
msg.Prefixlen = uint8(prefixLen)
wb.AddData(msg)
var ipData []byte
if family == syscall.AF_INET {
ipData = ip.To4()
} else {
ipData = ip.To16()
}
localData := newRtAttr(syscall.IFA_LOCAL, ipData)
wb.AddData(localData)
addrData := newRtAttr(syscall.IFA_ADDRESS, ipData)
wb.AddData(addrData)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
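
Taken together, these helpers cover roughly what `ip addr add`, `ip link set up` and `ip route add default` do. A hedged sketch of how they might be combined within this package (the interface name and addresses are examples only, and root privileges are required):

```go
// configureEth0Example is an illustrative sketch, not part of this diff.
func configureEth0Example() error {
	iface, err := net.InterfaceByName("eth0")
	if err != nil {
		return err
	}
	ip, ipNet, err := net.ParseCIDR("10.0.1.15/24")
	if err != nil {
		return err
	}
	// ip addr add 10.0.1.15/24 dev eth0
	if err := NetworkLinkAddIp(iface, ip, ipNet); err != nil {
		return err
	}
	// ip link set eth0 up
	if err := NetworkLinkUp(iface); err != nil {
		return err
	}
	// ip route add default via 10.0.1.1
	return AddDefaultGw(net.ParseIP("10.0.1.1"))
}
```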
func zeroTerminated(s string) []byte {
return []byte(s + "\000")
}
func nonZeroTerminated(s string) []byte {
return []byte(s)
}
// Add a new network link of a specified type. This is identical to
// running: ip link add $name type $linkType
func NetworkLinkAdd(name string, linkType string) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
wb.AddData(msg)
if name != "" {
nameData := newRtAttr(syscall.IFLA_IFNAME, zeroTerminated(name))
wb.AddData(nameData)
}
kindData := newRtAttr(IFLA_INFO_KIND, nonZeroTerminated(linkType))
infoData := newRtAttr(syscall.IFLA_LINKINFO, kindData.ToWireFormat())
wb.AddData(infoData)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
// Returns an array of IPNet for all the currently routed IPv4 subnets.
// This is similar to the first column of "ip route" output.
func NetworkGetRoutes() ([]Route, error) {
native := nativeEndian()
s, err := getNetlinkSocket()
if err != nil {
return nil, err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_GETROUTE, syscall.NLM_F_DUMP)
msg := newIfInfomsg(syscall.AF_UNSPEC)
wb.AddData(msg)
if err := s.Send(wb); err != nil {
return nil, err
}
pid, err := s.GetPid()
if err != nil {
return nil, err
}
res := make([]Route, 0)
done:
for {
msgs, err := s.Receive()
if err != nil {
return nil, err
}
for _, m := range msgs {
if m.Header.Seq != wb.Seq {
return nil, fmt.Errorf("Wrong Seq nr %d, expected 1", m.Header.Seq)
}
if m.Header.Pid != pid {
return nil, fmt.Errorf("Wrong pid %d, expected %d", m.Header.Pid, pid)
}
if m.Header.Type == syscall.NLMSG_DONE {
break done
}
if m.Header.Type == syscall.NLMSG_ERROR {
error := int32(native.Uint32(m.Data[0:4]))
if error == 0 {
break done
}
return nil, syscall.Errno(-error)
}
if m.Header.Type != syscall.RTM_NEWROUTE {
continue
}
var r Route
msg := (*RtMsg)(unsafe.Pointer(&m.Data[0:syscall.SizeofRtMsg][0]))
if msg.Flags&syscall.RTM_F_CLONED != 0 {
// Ignore cloned routes
continue
}
if msg.Table != syscall.RT_TABLE_MAIN {
// Ignore non-main tables
continue
}
if msg.Family != syscall.AF_INET {
// Ignore non-ipv4 routes
continue
}
if msg.Dst_len == 0 {
// Default routes
r.Default = true
}
attrs, err := syscall.ParseNetlinkRouteAttr(&m)
if err != nil {
return nil, err
}
for _, attr := range attrs {
switch attr.Attr.Type {
case syscall.RTA_DST:
ip := attr.Value
r.IPNet = &net.IPNet{
IP: ip,
Mask: net.CIDRMask(int(msg.Dst_len), 8*len(ip)),
}
case syscall.RTA_OIF:
index := int(native.Uint32(attr.Value[0:4]))
r.Iface, _ = net.InterfaceByIndex(index)
}
}
if r.Default || r.IPNet != nil {
res = append(res, r)
}
}
}
return res, nil
}
func getIfSocket() (fd int, err error) {
for _, socket := range []int{
syscall.AF_INET,
syscall.AF_PACKET,
syscall.AF_INET6,
} {
if fd, err = syscall.Socket(socket, syscall.SOCK_DGRAM, 0); err == nil {
break
}
}
if err == nil {
return fd, nil
}
return -1, err
}
func NetworkChangeName(iface *net.Interface, newName string) error {
fd, err := getIfSocket()
if err != nil {
return err
}
defer syscall.Close(fd)
data := [IFNAMSIZ * 2]byte{}
// the "-1"s here are very important for ensuring we get proper null
// termination of our new C strings
copy(data[:IFNAMSIZ-1], iface.Name)
copy(data[IFNAMSIZ:IFNAMSIZ*2-1], newName)
if _, _, errno := syscall.Syscall(syscall.SYS_IOCTL, uintptr(fd), syscall.SIOCSIFNAME, uintptr(unsafe.Pointer(&data[0]))); errno != 0 {
return errno
}
return nil
}
func NetworkCreateVethPair(name1, name2 string) error {
s, err := getNetlinkSocket()
if err != nil {
return err
}
defer s.Close()
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
msg := newIfInfomsg(syscall.AF_UNSPEC)
wb.AddData(msg)
nameData := newRtAttr(syscall.IFLA_IFNAME, zeroTerminated(name1))
wb.AddData(nameData)
nest1 := newRtAttr(syscall.IFLA_LINKINFO, nil)
newRtAttrChild(nest1, IFLA_INFO_KIND, zeroTerminated("veth"))
nest2 := newRtAttrChild(nest1, IFLA_INFO_DATA, nil)
nest3 := newRtAttrChild(nest2, VETH_INFO_PEER, nil)
newIfInfomsgChild(nest3, syscall.AF_UNSPEC)
newRtAttrChild(nest3, syscall.IFLA_IFNAME, zeroTerminated(name2))
wb.AddData(nest1)
if err := s.Send(wb); err != nil {
return err
}
return s.HandleAck(wb.Seq)
}
// Create the actual bridge device. This is more backward-compatible than
// netlink.NetworkLinkAdd and works on RHEL 6.
func CreateBridge(name string, setMacAddr bool) error {
s, err := syscall.Socket(syscall.AF_INET6, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
if err != nil {
// ipv6 issue, creating with ipv4
s, err = syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
if err != nil {
return err
}
}
defer syscall.Close(s)
nameBytePtr, err := syscall.BytePtrFromString(name)
if err != nil {
return err
}
if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, uintptr(s), SIOC_BRADDBR, uintptr(unsafe.Pointer(nameBytePtr))); err != 0 {
return err
}
if setMacAddr {
return setBridgeMacAddress(s, name)
}
return nil
}
// Add a slave to a bridge device. This is more backward-compatible than
// netlink.NetworkSetMaster and works on RHEL 6.
func AddToBridge(iface, master *net.Interface) error {
s, err := syscall.Socket(syscall.AF_INET6, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
if err != nil {
// ipv6 issue, creating with ipv4
s, err = syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, syscall.IPPROTO_IP)
if err != nil {
return err
}
}
defer syscall.Close(s)
ifr := ifreqIndex{}
copy(ifr.IfrnName[:], master.Name)
ifr.IfruIndex = int32(iface.Index)
if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, uintptr(s), SIOC_BRADDIF, uintptr(unsafe.Pointer(&ifr))); err != 0 {
return err
}
return nil
}
func setBridgeMacAddress(s int, name string) error {
ifr := ifreqHwaddr{}
ifr.IfruHwaddr.Family = syscall.ARPHRD_ETHER
copy(ifr.IfrnName[:], name)
for i := 0; i < 6; i++ {
ifr.IfruHwaddr.Data[i] = int8(rand.Intn(255))
}
ifr.IfruHwaddr.Data[0] &^= 0x1 // clear multicast bit
ifr.IfruHwaddr.Data[0] |= 0x2 // set local assignment bit (IEEE802)
if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, uintptr(s), syscall.SIOCSIFHWADDR, uintptr(unsafe.Pointer(&ifr))); err != 0 {
return err
}
return nil
}
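
For illustration, the ioctl-based bridge helpers above could be combined as follows (a sketch within this package; the names are examples and root privileges are required):

```go
// bridgeExample is an illustrative sketch, not part of this diff.
func bridgeExample() error {
	// brctl addbr br0, with a random locally-administered MAC address
	if err := CreateBridge("br0", true); err != nil {
		return err
	}
	br, err := net.InterfaceByName("br0")
	if err != nil {
		return err
	}
	eth, err := net.InterfaceByName("eth0")
	if err != nil {
		return err
	}
	// brctl addif br0 eth0
	return AddToBridge(eth, br)
}
```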

View File

@@ -0,0 +1,69 @@
// +build !linux !amd64
package netlink
import (
"errors"
"net"
)
var (
ErrNotImplemented = errors.New("not implemented")
)
func NetworkGetRoutes() ([]Route, error) {
return nil, ErrNotImplemented
}
func NetworkLinkAdd(name string, linkType string) error {
return ErrNotImplemented
}
func NetworkLinkUp(iface *net.Interface) error {
return ErrNotImplemented
}
func NetworkLinkAddIp(iface *net.Interface, ip net.IP, ipNet *net.IPNet) error {
return ErrNotImplemented
}
func AddDefaultGw(ip net.IP) error {
return ErrNotImplemented
}
func NetworkSetMTU(iface *net.Interface, mtu int) error {
return ErrNotImplemented
}
func NetworkCreateVethPair(name1, name2 string) error {
return ErrNotImplemented
}
func NetworkChangeName(iface *net.Interface, newName string) error {
return ErrNotImplemented
}
func NetworkSetNsFd(iface *net.Interface, fd int) error {
return ErrNotImplemented
}
func NetworkSetNsPid(iface *net.Interface, nspid int) error {
return ErrNotImplemented
}
func NetworkSetMaster(iface, master *net.Interface) error {
return ErrNotImplemented
}
func NetworkLinkDown(iface *net.Interface) error {
return ErrNotImplemented
}
func CreateBridge(name string, setMacAddr bool) error {
return ErrNotImplemented
}
func AddToBridge(iface, master *net.Interface) error {
return ErrNotImplemented
}

View File

@@ -2,7 +2,7 @@ package introspect
import (
"encoding/xml"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
"strings"
)

View File

@@ -2,7 +2,7 @@ package introspect
import (
"encoding/xml"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
"reflect"
)

View File

@@ -3,8 +3,8 @@
package prop
import (
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus/introspect"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus/introspect"
"sync"
)

Godeps/_workspace/src/github.com/tarm/goserial/LICENSE
View File

@@ -0,0 +1,27 @@
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,63 @@
GoSerial
========
A simple go package to allow you to read and write from the
serial port as a stream of bytes.
Details
-------
It aims to have the same API on all platforms, including windows. As
an added bonus, the windows package does not use cgo, so you can cross
compile for windows from another platform. Unfortunately goinstall
does not currently let you cross compile so you will have to do it
manually:
GOOS=windows make clean install
Currently there is very little in the way of configurability. You can
set the baud rate. Then you can Read(), Write(), or Close() the
connection. Read() will block until at least one byte is returned.
Write is the same. There is currently no exposed way to set the
timeouts, though patches are welcome.
Currently all ports are opened with 8 data bits, 1 stop bit, no
parity, no hardware flow control, and no software flow control. This
works fine for many real devices and many faux serial devices
including usb-to-serial converters and bluetooth serial ports.
You may Read() and Write() simultaneously on the same connection (from
different goroutines).
Usage
-----
```go
package main
import (
"github.com/tarm/goserial"
"log"
)
func main() {
c := &serial.Config{Name: "COM45", Baud: 115200}
s, err := serial.OpenPort(c)
if err != nil {
log.Fatal(err)
}
n, err := s.Write([]byte("test"))
if err != nil {
log.Fatal(err)
}
buf := make([]byte, 128)
n, err = s.Read(buf)
if err != nil {
log.Fatal(err)
}
log.Printf("%q", buf[:n])
}
```
Possible Future Work
--------------------
- better tests (loopback etc)

View File

@@ -0,0 +1,61 @@
package serial
import (
"testing"
"time"
)
func TestConnection(t *testing.T) {
c0 := &Config{Name: "/dev/ttyUSB0", Baud: 115200}
c1 := &Config{Name: "/dev/ttyUSB1", Baud: 115200}
s1, err := OpenPort(c0)
if err != nil {
t.Fatal(err)
}
s2, err := OpenPort(c1)
if err != nil {
t.Fatal(err)
}
ch := make(chan int, 1)
go func() {
buf := make([]byte, 128)
var readCount int
for {
n, err := s2.Read(buf)
if err != nil {
t.Fatal(err)
}
readCount++
t.Logf("Read %v %v bytes: % 02x %s", readCount, n, buf[:n], buf[:n])
select {
case <-ch:
ch <- readCount
close(ch)
default:
}
}
}()
if _, err = s1.Write([]byte("hello")); err != nil {
t.Fatal(err)
}
if _, err = s1.Write([]byte(" ")); err != nil {
t.Fatal(err)
}
time.Sleep(time.Second)
if _, err = s1.Write([]byte("world")); err != nil {
t.Fatal(err)
}
time.Sleep(time.Second / 10)
ch <- 0
s1.Write([]byte(" ")) // We could be blocked in the read without this
c := <-ch
exp := 5
if c >= exp {
t.Fatalf("Expected less than %v read, got %v", exp, c)
}
}

View File

@@ -0,0 +1,99 @@
/*
Goserial is a simple go package to allow you to read and write from
the serial port as a stream of bytes.
It aims to have the same API on all platforms, including windows. As
an added bonus, the windows package does not use cgo, so you can cross
compile for windows from another platform. Unfortunately goinstall
does not currently let you cross compile so you will have to do it
manually:
GOOS=windows make clean install
Currently there is very little in the way of configurability. You can
set the baud rate. Then you can Read(), Write(), or Close() the
connection. Read() will block until at least one byte is returned.
Write is the same. There is currently no exposed way to set the
timeouts, though patches are welcome.
Currently all ports are opened with 8 data bits, 1 stop bit, no
parity, no hardware flow control, and no software flow control. This
works fine for many real devices and many faux serial devices
including usb-to-serial converters and bluetooth serial ports.
You may Read() and Write() simultaneously on the same connection (from
different goroutines).
Example usage:
package main
import (
"github.com/tarm/goserial"
"log"
)
func main() {
c := &serial.Config{Name: "COM5", Baud: 115200}
s, err := serial.OpenPort(c)
if err != nil {
log.Fatal(err)
}
n, err := s.Write([]byte("test"))
if err != nil {
log.Fatal(err)
}
buf := make([]byte, 128)
n, err = s.Read(buf)
if err != nil {
log.Fatal(err)
}
log.Printf("%q", buf[:n])
}
*/
package serial
import "io"
// Config contains the information needed to open a serial port.
//
// Currently few options are implemented, but more may be added in the
// future (patches welcome), so it is recommended that you create a
// new config addressing the fields by name rather than by order.
//
// For example:
//
// c0 := &serial.Config{Name: "COM45", Baud: 115200}
// or
// c1 := new(serial.Config)
// c1.Name = "/dev/tty.usbserial"
// c1.Baud = 115200
//
type Config struct {
Name string
Baud int
// Size int // 0 get translated to 8
// Parity SomeNewTypeToGetCorrectDefaultOf_None
// StopBits SomeNewTypeToGetCorrectDefaultOf_1
// RTSFlowControl bool
// DTRFlowControl bool
// XONFlowControl bool
// CRLFTranslate bool
// TimeoutStuff int
}
// OpenPort opens a serial port with the specified configuration
func OpenPort(c *Config) (io.ReadWriteCloser, error) {
return openPort(c.Name, c.Baud)
}
// func Flush()
// func SendBreak()
// func RegisterBreakHandler(func())

View File

@@ -0,0 +1,90 @@
// +build linux,!cgo
package serial
import (
"io"
"os"
"syscall"
"unsafe"
)
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
var bauds = map[int]uint32{
50: syscall.B50,
75: syscall.B75,
110: syscall.B110,
134: syscall.B134,
150: syscall.B150,
200: syscall.B200,
300: syscall.B300,
600: syscall.B600,
1200: syscall.B1200,
1800: syscall.B1800,
2400: syscall.B2400,
4800: syscall.B4800,
9600: syscall.B9600,
19200: syscall.B19200,
38400: syscall.B38400,
57600: syscall.B57600,
115200: syscall.B115200,
230400: syscall.B230400,
460800: syscall.B460800,
500000: syscall.B500000,
576000: syscall.B576000,
921600: syscall.B921600,
1000000: syscall.B1000000,
1152000: syscall.B1152000,
1500000: syscall.B1500000,
2000000: syscall.B2000000,
2500000: syscall.B2500000,
3000000: syscall.B3000000,
3500000: syscall.B3500000,
4000000: syscall.B4000000,
}
rate := bauds[baud]
if rate == 0 {
return
}
f, err := os.OpenFile(name, syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_NONBLOCK, 0666)
if err != nil {
return nil, err
}
defer func() {
if err != nil && f != nil {
f.Close()
}
}()
fd := f.Fd()
t := syscall.Termios{
Iflag: syscall.IGNPAR,
Cflag: syscall.CS8 | syscall.CREAD | syscall.CLOCAL | rate,
Cc: [32]uint8{syscall.VMIN: 1},
Ispeed: rate,
Ospeed: rate,
}
if _, _, errno := syscall.Syscall6(
syscall.SYS_IOCTL,
uintptr(fd),
uintptr(syscall.TCSETS),
uintptr(unsafe.Pointer(&t)),
0,
0,
0,
); errno != 0 {
return nil, errno
}
if err = syscall.SetNonblock(int(fd), false); err != nil {
return
}
return f, nil
}

View File

@@ -0,0 +1,107 @@
// +build !windows,cgo
package serial
// #include <termios.h>
// #include <unistd.h>
import "C"
// TODO: Maybe change to using syscall package + ioctl instead of cgo
import (
"errors"
"fmt"
"io"
"os"
"syscall"
//"unsafe"
)
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
f, err := os.OpenFile(name, syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_NONBLOCK, 0666)
if err != nil {
return
}
fd := C.int(f.Fd())
if C.isatty(fd) != 1 {
f.Close()
return nil, errors.New("File is not a tty")
}
var st C.struct_termios
_, err = C.tcgetattr(fd, &st)
if err != nil {
f.Close()
return nil, err
}
var speed C.speed_t
switch baud {
case 115200:
speed = C.B115200
case 57600:
speed = C.B57600
case 38400:
speed = C.B38400
case 19200:
speed = C.B19200
case 9600:
speed = C.B9600
case 4800:
speed = C.B4800
case 2400:
speed = C.B2400
default:
f.Close()
return nil, fmt.Errorf("Unknown baud rate %v", baud)
}
_, err = C.cfsetispeed(&st, speed)
if err != nil {
f.Close()
return nil, err
}
_, err = C.cfsetospeed(&st, speed)
if err != nil {
f.Close()
return nil, err
}
// Select local mode
st.c_cflag |= (C.CLOCAL | C.CREAD)
// Select raw mode
st.c_lflag &= ^C.tcflag_t(C.ICANON | C.ECHO | C.ECHOE | C.ISIG)
st.c_oflag &= ^C.tcflag_t(C.OPOST)
_, err = C.tcsetattr(fd, C.TCSANOW, &st)
if err != nil {
f.Close()
return nil, err
}
//fmt.Println("Tweaking", name)
r1, _, e := syscall.Syscall(syscall.SYS_FCNTL,
uintptr(f.Fd()),
uintptr(syscall.F_SETFL),
uintptr(0))
if e != 0 || r1 != 0 {
s := fmt.Sprint("Clearing NONBLOCK syscall error:", e, r1)
f.Close()
return nil, errors.New(s)
}
/*
r1, _, e = syscall.Syscall(syscall.SYS_IOCTL,
uintptr(f.Fd()),
uintptr(0x80045402), // IOSSIOSPEED
uintptr(unsafe.Pointer(&baud)));
if e != 0 || r1 != 0 {
s := fmt.Sprint("Baudrate syscall error:", e, r1)
f.Close()
return nil, os.NewError(s)
}
*/
return f, nil
}

View File

@@ -0,0 +1,263 @@
// +build windows
package serial
import (
"fmt"
"io"
"os"
"sync"
"syscall"
"unsafe"
)
type serialPort struct {
f *os.File
fd syscall.Handle
rl sync.Mutex
wl sync.Mutex
ro *syscall.Overlapped
wo *syscall.Overlapped
}
type structDCB struct {
DCBlength, BaudRate uint32
flags [4]byte
wReserved, XonLim, XoffLim uint16
ByteSize, Parity, StopBits byte
XonChar, XoffChar, ErrorChar, EofChar, EvtChar byte
wReserved1 uint16
}
type structTimeouts struct {
ReadIntervalTimeout uint32
ReadTotalTimeoutMultiplier uint32
ReadTotalTimeoutConstant uint32
WriteTotalTimeoutMultiplier uint32
WriteTotalTimeoutConstant uint32
}
func openPort(name string, baud int) (rwc io.ReadWriteCloser, err error) {
if len(name) > 0 && name[0] != '\\' {
name = "\\\\.\\" + name
}
h, err := syscall.CreateFile(syscall.StringToUTF16Ptr(name),
syscall.GENERIC_READ|syscall.GENERIC_WRITE,
0,
nil,
syscall.OPEN_EXISTING,
syscall.FILE_ATTRIBUTE_NORMAL|syscall.FILE_FLAG_OVERLAPPED,
0)
if err != nil {
return nil, err
}
f := os.NewFile(uintptr(h), name)
defer func() {
if err != nil {
f.Close()
}
}()
if err = setCommState(h, baud); err != nil {
return
}
if err = setupComm(h, 64, 64); err != nil {
return
}
if err = setCommTimeouts(h); err != nil {
return
}
if err = setCommMask(h); err != nil {
return
}
ro, err := newOverlapped()
if err != nil {
return
}
wo, err := newOverlapped()
if err != nil {
return
}
port := new(serialPort)
port.f = f
port.fd = h
port.ro = ro
port.wo = wo
return port, nil
}
func (p *serialPort) Close() error {
return p.f.Close()
}
func (p *serialPort) Write(buf []byte) (int, error) {
p.wl.Lock()
defer p.wl.Unlock()
if err := resetEvent(p.wo.HEvent); err != nil {
return 0, err
}
var n uint32
err := syscall.WriteFile(p.fd, buf, &n, p.wo)
if err != nil && err != syscall.ERROR_IO_PENDING {
return int(n), err
}
return getOverlappedResult(p.fd, p.wo)
}
func (p *serialPort) Read(buf []byte) (int, error) {
if p == nil || p.f == nil {
return 0, fmt.Errorf("Invalid port on read %v %v", p, p.f)
}
p.rl.Lock()
defer p.rl.Unlock()
if err := resetEvent(p.ro.HEvent); err != nil {
return 0, err
}
var done uint32
err := syscall.ReadFile(p.fd, buf, &done, p.ro)
if err != nil && err != syscall.ERROR_IO_PENDING {
return int(done), err
}
return getOverlappedResult(p.fd, p.ro)
}
var (
nSetCommState,
nSetCommTimeouts,
nSetCommMask,
nSetupComm,
nGetOverlappedResult,
nCreateEvent,
nResetEvent uintptr
)
func init() {
k32, err := syscall.LoadLibrary("kernel32.dll")
if err != nil {
panic("LoadLibrary " + err.Error())
}
defer syscall.FreeLibrary(k32)
nSetCommState = getProcAddr(k32, "SetCommState")
nSetCommTimeouts = getProcAddr(k32, "SetCommTimeouts")
nSetCommMask = getProcAddr(k32, "SetCommMask")
nSetupComm = getProcAddr(k32, "SetupComm")
nGetOverlappedResult = getProcAddr(k32, "GetOverlappedResult")
nCreateEvent = getProcAddr(k32, "CreateEventW")
nResetEvent = getProcAddr(k32, "ResetEvent")
}
func getProcAddr(lib syscall.Handle, name string) uintptr {
addr, err := syscall.GetProcAddress(lib, name)
if err != nil {
panic(name + " " + err.Error())
}
return addr
}
func setCommState(h syscall.Handle, baud int) error {
var params structDCB
params.DCBlength = uint32(unsafe.Sizeof(params))
params.flags[0] = 0x01 // fBinary
params.flags[0] |= 0x10 // Assert DSR
params.BaudRate = uint32(baud)
params.ByteSize = 8
r, _, err := syscall.Syscall(nSetCommState, 2, uintptr(h), uintptr(unsafe.Pointer(&params)), 0)
if r == 0 {
return err
}
return nil
}
func setCommTimeouts(h syscall.Handle) error {
var timeouts structTimeouts
const MAXDWORD = 1<<32 - 1
timeouts.ReadIntervalTimeout = MAXDWORD
timeouts.ReadTotalTimeoutMultiplier = MAXDWORD
timeouts.ReadTotalTimeoutConstant = MAXDWORD - 1
/* From http://msdn.microsoft.com/en-us/library/aa363190(v=VS.85).aspx
For blocking I/O see below:
Remarks:
If an application sets ReadIntervalTimeout and
ReadTotalTimeoutMultiplier to MAXDWORD and sets
ReadTotalTimeoutConstant to a value greater than zero and
less than MAXDWORD, one of the following occurs when the
ReadFile function is called:
If there are any bytes in the input buffer, ReadFile returns
immediately with the bytes in the buffer.
If there are no bytes in the input buffer, ReadFile waits
until a byte arrives and then returns immediately.
If no bytes arrive within the time specified by
ReadTotalTimeoutConstant, ReadFile times out.
*/
r, _, err := syscall.Syscall(nSetCommTimeouts, 2, uintptr(h), uintptr(unsafe.Pointer(&timeouts)), 0)
if r == 0 {
return err
}
return nil
}
func setupComm(h syscall.Handle, in, out int) error {
r, _, err := syscall.Syscall(nSetupComm, 3, uintptr(h), uintptr(in), uintptr(out))
if r == 0 {
return err
}
return nil
}
func setCommMask(h syscall.Handle) error {
const EV_RXCHAR = 0x0001
r, _, err := syscall.Syscall(nSetCommMask, 2, uintptr(h), EV_RXCHAR, 0)
if r == 0 {
return err
}
return nil
}
func resetEvent(h syscall.Handle) error {
r, _, err := syscall.Syscall(nResetEvent, 1, uintptr(h), 0, 0)
if r == 0 {
return err
}
return nil
}
func newOverlapped() (*syscall.Overlapped, error) {
var overlapped syscall.Overlapped
r, _, err := syscall.Syscall6(nCreateEvent, 4, 0, 1, 0, 0, 0, 0)
if r == 0 {
return nil, err
}
overlapped.HEvent = syscall.Handle(r)
return &overlapped, nil
}
func getOverlappedResult(h syscall.Handle, overlapped *syscall.Overlapped) (int, error) {
var n int
r, _, err := syscall.Syscall6(nGetOverlappedResult, 4,
uintptr(h),
uintptr(unsafe.Pointer(overlapped)),
uintptr(unsafe.Pointer(&n)), 1, 0, 0)
if r == 0 {
return n, err
}
return n, nil
}

View File

@@ -1,3 +1,15 @@
The following files were ported to Go from C files of libyaml, and thus
are still covered by their original copyright and license:
apic.go
emitterc.go
parserc.go
readerc.go
scannerc.go
writerc.go
yamlh.go
yamlprivateh.go
Copyright (c) 2006 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of

Godeps/_workspace/src/gopkg.in/yaml.v1/README.md
View File

@@ -0,0 +1,128 @@
# YAML support for the Go language
Introduction
------------
The yaml package enables Go programs to comfortably encode and decode YAML
values. It was developed within [Canonical](https://www.canonical.com) as
part of the [juju](https://juju.ubuntu.com) project, and is based on a
pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
C library to parse and generate YAML data quickly and reliably.
Compatibility
-------------
The yaml package is almost compatible with YAML 1.1, including support for
anchors, tags, etc. There are still a few missing bits, such as document
merging, base-60 floats (huh?), and multi-document unmarshalling. These
features are not hard to add, and will be introduced as necessary.
Installation and usage
----------------------
The import path for the package is *gopkg.in/yaml.v1*.
To install it, run:
go get gopkg.in/yaml.v1
API documentation
-----------------
If opened in a browser, the import path itself leads to the API documentation:
* [https://gopkg.in/yaml.v1](https://gopkg.in/yaml.v1)
API stability
-------------
The package API for yaml v1 will remain stable as described in [gopkg.in](https://gopkg.in).
License
-------
The yaml package is licensed under the LGPL with an exception that allows it to be linked statically. Please see the LICENSE file for details.
Example
-------
```Go
package main
import (
"fmt"
"log"
"gopkg.in/yaml.v1"
)
var data = `
a: Easy!
b:
c: 2
d: [3, 4]
`
type T struct {
A string
B struct{C int; D []int ",flow"}
}
func main() {
t := T{}
err := yaml.Unmarshal([]byte(data), &t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t:\n%v\n\n", t)
d, err := yaml.Marshal(&t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t dump:\n%s\n\n", string(d))
m := make(map[interface{}]interface{})
err = yaml.Unmarshal([]byte(data), &m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m:\n%v\n\n", m)
d, err = yaml.Marshal(&m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m dump:\n%s\n\n", string(d))
}
```
This example will generate the following output:
```
--- t:
{Easy! {2 [3 4]}}
--- t dump:
a: Easy!
b:
c: 2
d: [3, 4]
--- m:
map[a:Easy! b:map[c:2 d:[3 4]]]
--- m dump:
a: Easy!
b:
c: 2
d:
- 3
- 4
```

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"io"

View File

@@ -1,8 +1,9 @@
package goyaml
package yaml
import (
"reflect"
"strconv"
"time"
)
const (
@@ -211,6 +212,16 @@ func newDecoder() *decoder {
// returned to call SetYAML() with the value of *out once it's defined.
//
func (d *decoder) setter(tag string, out *reflect.Value, good *bool) (set func()) {
if (*out).Kind() != reflect.Ptr && (*out).CanAddr() {
setter, _ := (*out).Addr().Interface().(Setter)
if setter != nil {
var arg interface{}
*out = reflect.ValueOf(&arg).Elem()
return func() {
*good = setter.SetYAML(tag, arg)
}
}
}
again := true
for again {
again = false
@@ -279,16 +290,19 @@ func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
return good
}
var durationType = reflect.TypeOf(time.Duration(0))
func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
var tag string
var resolved interface{}
if n.tag == "" && !n.implicit {
tag = "!!str"
resolved = n.value
} else {
tag, resolved = resolve(n.tag, n.value)
if set := d.setter(tag, &out, &good); set != nil {
defer set()
}
}
if set := d.setter(tag, &out, &good); set != nil {
defer set()
}
switch out.Kind() {
case reflect.String:
@@ -320,6 +334,14 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
out.SetInt(int64(resolved))
good = true
}
case string:
if out.Type() == durationType {
d, err := time.ParseDuration(resolved)
if err == nil {
out.SetInt(int64(d))
good = true
}
}
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
switch resolved := resolved.(type) {
@@ -437,6 +459,10 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
}
l := len(n.children)
for i := 0; i < l; i += 2 {
if isMerge(n.children[i]) {
d.merge(n.children[i+1], out)
continue
}
k := reflect.New(kt).Elem()
if d.unmarshal(n.children[i], k) {
e := reflect.New(et).Elem()
@@ -456,7 +482,12 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
name := settableValueOf("")
l := len(n.children)
for i := 0; i < l; i += 2 {
if !d.unmarshal(n.children[i], name) {
ni := n.children[i]
if isMerge(ni) {
d.merge(n.children[i+1], out)
continue
}
if !d.unmarshal(ni, name) {
continue
}
if info, ok := sinfo.FieldsMap[name.String()]; ok {
@@ -471,3 +502,37 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
}
return true
}
func (d *decoder) merge(n *node, out reflect.Value) {
const wantMap = "map merge requires map or sequence of maps as the value"
switch n.kind {
case mappingNode:
d.unmarshal(n, out)
case aliasNode:
an, ok := d.doc.anchors[n.value]
if ok && an.kind != mappingNode {
panic(wantMap)
}
d.unmarshal(n, out)
case sequenceNode:
// Step backwards as earlier nodes take precedence.
for i := len(n.children)-1; i >= 0; i-- {
ni := n.children[i]
if ni.kind == aliasNode {
an, ok := d.doc.anchors[ni.value]
if ok && an.kind != mappingNode {
panic(wantMap)
}
} else if ni.kind != mappingNode {
panic(wantMap)
}
d.unmarshal(ni, out)
}
default:
panic(wantMap)
}
}
func isMerge(n *node) bool {
return n.kind == scalarNode && n.value == "<<" && (n.implicit == true || n.tag == "!!merge" || n.tag == "tag:yaml.org,2002:merge")
}

View File

@@ -1,10 +1,11 @@
package goyaml_test
package yaml_test
import (
. "launchpad.net/gocheck"
"github.com/coreos/coreos-cloudinit/third_party/launchpad.net/goyaml"
. "gopkg.in/check.v1"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/gopkg.in/yaml.v1"
"math"
"reflect"
"time"
)
var unmarshalIntTest = 123
@@ -350,6 +351,32 @@ var unmarshalTests = []struct {
C inlineB `yaml:",inline"`
}{1, inlineB{2, inlineC{3}}},
},
// bug 1243827
{
"a: -b_c",
map[string]interface{}{"a": "-b_c"},
},
{
"a: +b_c",
map[string]interface{}{"a": "+b_c"},
},
{
"a: 50cent_of_dollar",
map[string]interface{}{"a": "50cent_of_dollar"},
},
// Duration
{
"a: 3s",
map[string]time.Duration{"a": 3 * time.Second},
},
// Issue #24.
{
"a: <foo>",
map[string]string{"a": "<foo>"},
},
}
type inlineB struct {
@@ -377,7 +404,7 @@ func (s *S) TestUnmarshal(c *C) {
pv := reflect.New(pt.Elem())
value = pv.Interface()
}
err := goyaml.Unmarshal([]byte(item.data), value)
err := yaml.Unmarshal([]byte(item.data), value)
c.Assert(err, IsNil, Commentf("Item #%d", i))
if t.Kind() == reflect.String {
c.Assert(*value.(*string), Equals, item.value, Commentf("Item #%d", i))
@@ -389,7 +416,7 @@ func (s *S) TestUnmarshal(c *C) {
func (s *S) TestUnmarshalNaN(c *C) {
value := map[string]interface{}{}
err := goyaml.Unmarshal([]byte("notanum: .NaN"), &value)
err := yaml.Unmarshal([]byte("notanum: .NaN"), &value)
c.Assert(err, IsNil)
c.Assert(math.IsNaN(value["notanum"].(float64)), Equals, true)
}
@@ -408,7 +435,7 @@ var unmarshalErrorTests = []struct {
func (s *S) TestUnmarshalErrors(c *C) {
for _, item := range unmarshalErrorTests {
var value interface{}
err := goyaml.Unmarshal([]byte(item.data), &value)
err := yaml.Unmarshal([]byte(item.data), &value)
c.Assert(err, ErrorMatches, item.error, Commentf("Partial unmarshal: %#v", value))
}
}
@@ -421,6 +448,8 @@ var setterTests = []struct {
{"_: [1,A]", "!!seq", []interface{}{1, "A"}},
{"_: 10", "!!int", 10},
{"_: null", "!!null", nil},
{`_: BAR!`, "!!str", "BAR!"},
{`_: "BAR!"`, "!!str", "BAR!"},
{"_: !!foo 'BAR!'", "!!foo", "BAR!"},
}
@@ -442,17 +471,31 @@ func (o *typeWithSetter) SetYAML(tag string, value interface{}) (ok bool) {
return true
}
type typeWithSetterField struct {
type setterPointerType struct {
Field *typeWithSetter "_"
}
func (s *S) TestUnmarshalWithSetter(c *C) {
type setterValueType struct {
Field typeWithSetter "_"
}
func (s *S) TestUnmarshalWithPointerSetter(c *C) {
for _, item := range setterTests {
obj := &typeWithSetterField{}
err := goyaml.Unmarshal([]byte(item.data), obj)
obj := &setterPointerType{}
err := yaml.Unmarshal([]byte(item.data), obj)
c.Assert(err, IsNil)
c.Assert(obj.Field, NotNil,
Commentf("Pointer not initialized (%#v)", item.value))
c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
c.Assert(obj.Field.tag, Equals, item.tag)
c.Assert(obj.Field.value, DeepEquals, item.value)
}
}
func (s *S) TestUnmarshalWithValueSetter(c *C) {
for _, item := range setterTests {
obj := &setterValueType{}
err := yaml.Unmarshal([]byte(item.data), obj)
c.Assert(err, IsNil)
c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
c.Assert(obj.Field.tag, Equals, item.tag)
c.Assert(obj.Field.value, DeepEquals, item.value)
}
@@ -460,7 +503,7 @@ func (s *S) TestUnmarshalWithSetter(c *C) {
func (s *S) TestUnmarshalWholeDocumentWithSetter(c *C) {
obj := &typeWithSetter{}
err := goyaml.Unmarshal([]byte(setterTests[0].data), obj)
err := yaml.Unmarshal([]byte(setterTests[0].data), obj)
c.Assert(err, IsNil)
c.Assert(obj.tag, Equals, setterTests[0].tag)
value, ok := obj.value.(map[interface{}]interface{})
@@ -477,8 +520,8 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
}()
m := map[string]*typeWithSetter{}
data := "{abc: 1, def: 2, ghi: 3, jkl: 4}"
err := goyaml.Unmarshal([]byte(data), m)
data := `{abc: 1, def: 2, ghi: 3, jkl: 4}`
err := yaml.Unmarshal([]byte(data), m)
c.Assert(err, IsNil)
c.Assert(m["abc"], NotNil)
c.Assert(m["def"], IsNil)
@@ -489,6 +532,98 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
c.Assert(m["ghi"].value, Equals, 3)
}
// From http://yaml.org/type/merge.html
var mergeTests = `
anchors:
- &CENTER { "x": 1, "y": 2 }
- &LEFT { "x": 0, "y": 2 }
- &BIG { "r": 10 }
- &SMALL { "r": 1 }
# All the following maps are equal:
plain:
# Explicit keys
"x": 1
"y": 2
"r": 10
label: center/big
mergeOne:
# Merge one map
<< : *CENTER
"r": 10
label: center/big
mergeMultiple:
# Merge multiple maps
<< : [ *CENTER, *BIG ]
label: center/big
override:
# Override
<< : [ *BIG, *LEFT, *SMALL ]
"x": 1
label: center/big
shortTag:
# Explicit short merge tag
!!merge "<<" : [ *CENTER, *BIG ]
label: center/big
longTag:
# Explicit merge long tag
!<tag:yaml.org,2002:merge> "<<" : [ *CENTER, *BIG ]
label: center/big
inlineMap:
# Inlined map
<< : {"x": 1, "y": 2, "r": 10}
label: center/big
inlineSequenceMap:
# Inlined map in sequence
<< : [ *CENTER, {"r": 10} ]
label: center/big
`
func (s *S) TestMerge(c *C) {
var want = map[interface{}]interface{}{
"x": 1,
"y": 2,
"r": 10,
"label": "center/big",
}
var m map[string]interface{}
err := yaml.Unmarshal([]byte(mergeTests), &m)
c.Assert(err, IsNil)
for name, test := range m {
if name == "anchors" {
continue
}
c.Assert(test, DeepEquals, want, Commentf("test %q failed", name))
}
}
func (s *S) TestMergeStruct(c *C) {
type Data struct {
X, Y, R int
Label string
}
want := Data{1, 2, 10, "center/big"}
var m map[string]Data
err := yaml.Unmarshal([]byte(mergeTests), &m)
c.Assert(err, IsNil)
for name, test := range m {
if name == "anchors" {
continue
}
c.Assert(test, Equals, want, Commentf("test %q failed", name))
}
}
//var data []byte
//func init() {
// var err error
@@ -502,7 +637,7 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
// var err error
// for i := 0; i < c.N; i++ {
// var v map[string]interface{}
// err = goyaml.Unmarshal(data, &v)
// err = yaml.Unmarshal(data, &v)
// }
// if err != nil {
// panic(err)
@@ -511,9 +646,9 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
//
//func (s *S) BenchmarkMarshal(c *C) {
// var v map[string]interface{}
// goyaml.Unmarshal(data, &v)
// yaml.Unmarshal(data, &v)
// c.ResetTimer()
// for i := 0; i < c.N; i++ {
// goyaml.Marshal(&v)
// yaml.Marshal(&v)
// }
//}

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"bytes"

View File

@@ -1,9 +1,10 @@
package goyaml
package yaml
import (
"reflect"
"sort"
"strconv"
"time"
)
type encoder struct {
@@ -85,7 +86,11 @@ func (e *encoder) marshal(tag string, in reflect.Value) {
case reflect.String:
e.stringv(tag, in)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
e.intv(tag, in)
if in.Type() == durationType {
e.stringv(tag, reflect.ValueOf(in.Interface().(time.Duration).String()))
} else {
e.intv(tag, in)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
e.uintv(tag, in)
case reflect.Float32, reflect.Float64:
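The encoder hunk above special-cases time.Duration so it marshals as its String() form rather than a raw nanosecond count, matching the new "a: 3s" cases added to the marshal and unmarshal tests. A small sketch, again assuming the upstream gopkg.in/yaml.v1 path for the vendored package:

package main

import (
	"fmt"
	"time"

	"gopkg.in/yaml.v1"
)

func main() {
	// Without the durationType check this would emit 3000000000;
	// with it, the human-readable duration string is produced.
	out, err := yaml.Marshal(map[string]time.Duration{"a": 3 * time.Second})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // a: 3s
}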

View File

@@ -1,12 +1,13 @@
package goyaml_test
package yaml_test
import (
"fmt"
. "launchpad.net/gocheck"
"github.com/coreos/coreos-cloudinit/third_party/launchpad.net/goyaml"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/gopkg.in/yaml.v1"
. "gopkg.in/check.v1"
"math"
"strconv"
"strings"
"time"
)
var marshalIntTest = 123
@@ -212,11 +213,23 @@ var marshalTests = []struct {
}{1, inlineB{2, inlineC{3}}},
"a: 1\nb: 2\nc: 3\n",
},
// Duration
{
map[string]time.Duration{"a": 3 * time.Second},
"a: 3s\n",
},
// Issue #24.
{
map[string]string{"a": "<foo>"},
"a: <foo>\n",
},
}
func (s *S) TestMarshal(c *C) {
for _, item := range marshalTests {
data, err := goyaml.Marshal(item.value)
data, err := yaml.Marshal(item.value)
c.Assert(err, IsNil)
c.Assert(string(data), Equals, item.data)
}
@@ -237,7 +250,7 @@ var marshalErrorTests = []struct {
func (s *S) TestMarshalErrors(c *C) {
for _, item := range marshalErrorTests {
_, err := goyaml.Marshal(item.value)
_, err := yaml.Marshal(item.value)
c.Assert(err, ErrorMatches, item.error)
}
}
@@ -269,12 +282,12 @@ func (s *S) TestMarshalTypeCache(c *C) {
var err error
func() {
type T struct{ A int }
data, err = goyaml.Marshal(&T{})
data, err = yaml.Marshal(&T{})
c.Assert(err, IsNil)
}()
func() {
type T struct{ B int }
data, err = goyaml.Marshal(&T{})
data, err = yaml.Marshal(&T{})
c.Assert(err, IsNil)
}()
c.Assert(string(data), Equals, "b: 0\n")
@@ -298,7 +311,7 @@ func (s *S) TestMashalWithGetter(c *C) {
obj := &typeWithGetterField{}
obj.Field.tag = item.tag
obj.Field.value = item.value
data, err := goyaml.Marshal(obj)
data, err := yaml.Marshal(obj)
c.Assert(err, IsNil)
c.Assert(string(data), Equals, string(item.data))
}
@@ -308,7 +321,7 @@ func (s *S) TestUnmarshalWholeDocumentWithGetter(c *C) {
obj := &typeWithGetter{}
obj.tag = ""
obj.value = map[string]string{"hello": "world!"}
data, err := goyaml.Marshal(obj)
data, err := yaml.Marshal(obj)
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "hello: world!\n")
}
@@ -356,7 +369,7 @@ func (s *S) TestSortedOutput(c *C) {
for _, k := range order {
m[k] = 1
}
data, err := goyaml.Marshal(m)
data, err := yaml.Marshal(m)
c.Assert(err, IsNil)
out := "\n" + string(data)
last := 0

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"io"

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"math"
@@ -27,7 +27,6 @@ func init() {
t[int(c)] = 'M' // In map
}
t[int('.')] = '.' // Float (potentially in map)
t[int('<')] = '<' // Merge
var resolveMapList = []struct {
v interface{}
@@ -45,6 +44,7 @@ func init() {
{math.Inf(+1), "!!float", []string{".inf", ".Inf", ".INF"}},
{math.Inf(+1), "!!float", []string{"+.inf", "+.Inf", "+.INF"}},
{math.Inf(-1), "!!float", []string{"-.inf", "-.Inf", "-.INF"}},
{"<<", "!!merge", []string{"<<"}},
}
m := resolveMap
@@ -113,13 +113,8 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
case 'D', 'S':
// Int, float, or timestamp.
for i := 0; i != len(in); i++ {
if in[i] == '_' {
in = strings.Replace(in, "_", "", -1)
break
}
}
intv, err := strconv.ParseInt(in, 0, 64)
plain := strings.Replace(in, "_", "", -1)
intv, err := strconv.ParseInt(plain, 0, 64)
if err == nil {
if intv == int64(int(intv)) {
return "!!int", int(intv)
@@ -127,26 +122,23 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
return "!!int", intv
}
}
floatv, err := strconv.ParseFloat(in, 64)
floatv, err := strconv.ParseFloat(plain, 64)
if err == nil {
return "!!float", floatv
}
if strings.HasPrefix(in, "0b") {
intv, err := strconv.ParseInt(in[2:], 2, 64)
if strings.HasPrefix(plain, "0b") {
intv, err := strconv.ParseInt(plain[2:], 2, 64)
if err == nil {
return "!!int", int(intv)
}
} else if strings.HasPrefix(in, "-0b") {
intv, err := strconv.ParseInt(in[3:], 2, 64)
} else if strings.HasPrefix(plain, "-0b") {
intv, err := strconv.ParseInt(plain[3:], 2, 64)
if err == nil {
return "!!int", -int(intv)
}
}
// XXX Handle timestamps here.
case '<':
// XXX Handle merge (<<) here.
default:
panic("resolveTable item not yet handled: " +
string([]byte{c}) + " (with " + in + ")")

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"reflect"

View File

@@ -1,7 +1,7 @@
package goyaml_test
package yaml_test
import (
. "launchpad.net/gocheck"
. "gopkg.in/check.v1"
"testing"
)

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
// Set the writer error and return false.
func yaml_emitter_set_writer_error(emitter *yaml_emitter_t, problem string) bool {

View File

@@ -1,5 +1,10 @@
// Package goyaml implements YAML support for the Go language.
package goyaml
// Package yaml implements YAML support for the Go language.
//
// Source code and other details for the project are available at GitHub:
//
// https://github.com/go-yaml/yaml
//
package yaml
import (
"errors"
@@ -28,32 +33,31 @@ func handleErr(err *error) {
}
}
// Objects implementing the goyaml.Setter interface will receive the YAML
// tag and value via the SetYAML method during unmarshaling, rather than
// being implicitly assigned by the goyaml machinery. If setting the value
// works, the method should return true. If it returns false, the given
// value will be omitted from maps and slices.
// The Setter interface may be implemented by types to do their own custom
// unmarshalling of YAML values, rather than being implicitly assigned by
// the yaml package machinery. If setting the value works, the method should
// return true. If it returns false, the value is considered unsupported
// and is omitted from maps and slices.
type Setter interface {
SetYAML(tag string, value interface{}) bool
}
// Objects implementing the goyaml.Getter interface will get the GetYAML()
// method called when goyaml is requested to marshal the given value, and
// the result of this method will be marshaled in place of the actual object.
// The Getter interface is implemented by types to do their own custom
// marshalling into a YAML tag and value.
type Getter interface {
GetYAML() (tag string, value interface{})
}
// Unmarshal decodes the first document found within the in byte slice
// and assigns decoded values into the object pointed by out.
// and assigns decoded values into the out value.
//
// Maps, pointers to structs and ints, etc, may all be used as out values.
// If an internal pointer within a struct is not initialized, goyaml
// will initialize it if necessary for unmarshalling the provided data,
// but the struct provided as out must not be a nil pointer.
// Maps and pointers (to a struct, string, int, etc) are accepted as out
// values. If an internal pointer within a struct is not initialized,
// the yaml package will initialize it if necessary for unmarshalling
// the provided data. The out parameter must not be nil.
//
// The type of the decoded values and the type of out will be considered,
// and Unmarshal() will do the best possible job to unmarshal values
// and Unmarshal will do the best possible job to unmarshal values
// appropriately. It is NOT considered an error, though, to skip values
// because they are not available in the decoded YAML, or if they are not
// compatible with the out value. To ensure something was properly
@@ -61,11 +65,11 @@ type Getter interface {
// field (usually the zero value).
//
// Struct fields are only unmarshalled if they are exported (have an
// upper case first letter), and will be unmarshalled using the field
// name lowercased by default. When custom field names are desired, the
// tag value may be used to tweak the name. Everything before the first
// comma in the field tag will be used as the name. The values following
// the comma are used to tweak the marshalling process (see Marshal).
// upper case first letter), and are unmarshalled using the field name
// lowercased as the default key. Custom keys may be defined via the
// "yaml" name in the field tag: the content preceding the first comma
// is used as the key, and the following comma-separated options are
// used to tweak the marshalling process (see Marshal).
// Conflicting names result in a runtime error.
//
// For example:
@@ -75,7 +79,7 @@ type Getter interface {
// B int
// }
// var T t
// goyaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
// yaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
//
// See the documentation of Marshal for the format of tags and a list of
// supported tag options.
@@ -94,14 +98,16 @@ func Unmarshal(in []byte, out interface{}) (err error) {
// Marshal serializes the value provided into a YAML document. The structure
// of the generated document will reflect the structure of the value itself.
// Maps, pointers to structs and ints, etc, may all be used as the in value.
// Maps and pointers (to struct, string, int, etc) are accepted as the in value.
//
// In the case of struct values, only exported fields will be serialized.
// The lowercased field name is used as the key for each exported field,
// but this behavior may be changed using the respective field tag.
// The tag may also contain flags to tweak the marshalling behavior for
// the field. Conflicting names result in a runtime error. The tag format
// accepted is:
// Struct fields are only unmarshalled if they are exported (have an upper case
// first letter), and are unmarshalled using the field name lowercased as the
// default key. Custom keys may be defined via the "yaml" name in the field
// tag: the content preceding the first comma is used as the key, and the
// following comma-separated options are used to tweak the marshalling process.
// Conflicting names result in a runtime error.
//
// The field tag format accepted is:
//
// `(...) yaml:"[<key>][,<flag1>[,<flag2>]]" (...)`
//
@@ -126,8 +132,8 @@ func Unmarshal(in []byte, out interface{}) (err error) {
// F int "a,omitempty"
// B int
// }
// goyaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
// goyaml.Marshal(&T{F: 1}} // Returns "a: 1\nb: 0\n"
// yaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
// yaml.Marshal(&T{F: 1}} // Returns "a: 1\nb: 0\n"
//
func Marshal(in interface{}) (out []byte, err error) {
defer handleErr(&err)
@@ -142,7 +148,7 @@ func Marshal(in interface{}) (out []byte, err error) {
// --------------------------------------------------------------------------
// Maintain a mapping of keys to structure field indexes
// The code in this section was copied from gobson.
// The code in this section was copied from mgo/bson.
// structInfo holds details for the serialization of fields of
// a given struct.

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
import (
"io"

View File

@@ -1,4 +1,4 @@
package goyaml
package yaml
const (
// The size of the input raw buffer.

202
LICENSE Normal file
View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

3
MAINTAINERS Normal file
View File

@@ -0,0 +1,3 @@
Alex Crawford <alex.crawford@coreos.com> (@crawford)
Jonathan Boulle <jonathan.boulle@coreos.com> (@jonboulle)
Brian Waldon <brian.waldon@coreos.com> (@bcwaldon)

5
NOTICE Normal file
View File

@@ -0,0 +1,5 @@
CoreOS Project
Copyright 2014 CoreOS, Inc
This product includes software developed at CoreOS, Inc.
(http://www.coreos.com/).

View File

@@ -1,4 +1,4 @@
# coreos-cloudinit
# coreos-cloudinit [![Build Status](https://travis-ci.org/coreos/coreos-cloudinit.png?branch=master)](https://travis-ci.org/coreos/coreos-cloudinit)
coreos-cloudinit enables a user to customize CoreOS machines by providing either a cloud-config document or an executable script through user-data.

14
build
View File

@@ -1,6 +1,14 @@
#!/bin/bash -e
export GOBIN=${PWD}/bin
export GOPATH=${PWD}
ORG_PATH="github.com/coreos"
REPO_PATH="${ORG_PATH}/coreos-cloudinit"
go build -o bin/coreos-cloudinit github.com/coreos/coreos-cloudinit
if [ ! -h gopath/src/${REPO_PATH} ]; then
mkdir -p gopath/src/${ORG_PATH}
ln -s ../../../.. gopath/src/${REPO_PATH} || exit 255
fi
export GOBIN=${PWD}/bin
export GOPATH=${PWD}/gopath
go build -o bin/coreos-cloudinit ${REPO_PATH}

186
config/config.go Normal file
View File

@@ -0,0 +1,186 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"fmt"
"reflect"
"strings"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/gopkg.in/yaml.v1"
)
// CloudConfig encapsulates the entire cloud-config configuration file and maps
// directly to YAML. Fields that cannot be set in the cloud-config (fields
// used for internal use) have the YAML tag '-' so that they aren't marshalled.
type CloudConfig struct {
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
Coreos struct {
Etcd Etcd `yaml:"etcd"`
Flannel Flannel `yaml:"flannel"`
Fleet Fleet `yaml:"fleet"`
OEM OEM `yaml:"oem"`
Update Update `yaml:"update"`
Units []Unit `yaml:"units"`
} `yaml:"coreos"`
WriteFiles []File `yaml:"write_files"`
Hostname string `yaml:"hostname"`
Users []User `yaml:"users"`
ManageEtcHosts EtcHosts `yaml:"manage_etc_hosts"`
NetworkConfigPath string `yaml:"-"`
NetworkConfig string `yaml:"-"`
}
func IsCloudConfig(userdata string) bool {
header := strings.SplitN(userdata, "\n", 2)[0]
// Explicitly trim the header so we can handle user-data from
// non-unix operating systems. The rest of the file is parsed
// by yaml, which correctly handles CRLF.
header = strings.TrimSuffix(header, "\r")
return (header == "#cloud-config")
}
// NewCloudConfig instantiates a new CloudConfig from the given contents (a
// string of YAML), returning any error encountered. It will ignore unknown
// fields but log encountering them.
func NewCloudConfig(contents string) (*CloudConfig, error) {
var cfg CloudConfig
ncontents, err := normalizeConfig(contents)
if err != nil {
return &cfg, err
}
if err = yaml.Unmarshal(ncontents, &cfg); err != nil {
return &cfg, err
}
return &cfg, nil
}
func (cc CloudConfig) String() string {
bytes, err := yaml.Marshal(cc)
if err != nil {
return ""
}
stringified := string(bytes)
stringified = fmt.Sprintf("#cloud-config\n%s", stringified)
return stringified
}
// IsZero returns whether or not the parameter is the zero value for its type.
// If the parameter is a struct, only the exported fields are considered.
func IsZero(c interface{}) bool {
return isZero(reflect.ValueOf(c))
}
type ErrorValid struct {
Value string
Valid []string
Field string
}
func (e ErrorValid) Error() string {
return fmt.Sprintf("invalid value %q for option %q (valid options: %q)", e.Value, e.Field, e.Valid)
}
// AssertStructValid checks the fields in the structure and makes sure that
// they contain valid values as specified by the 'valid' flag. Empty fields are
// implicitly valid.
func AssertStructValid(c interface{}) error {
ct := reflect.TypeOf(c)
cv := reflect.ValueOf(c)
for i := 0; i < ct.NumField(); i++ {
ft := ct.Field(i)
if !isFieldExported(ft) {
continue
}
if err := AssertValid(cv.Field(i), ft.Tag.Get("valid")); err != nil {
err.Field = ft.Name
return err
}
}
return nil
}
// AssertValid checks to make sure that the given value is in the list of
// valid values. Zero values are implicitly valid.
func AssertValid(value reflect.Value, valid string) *ErrorValid {
if valid == "" || isZero(value) {
return nil
}
vs := fmt.Sprintf("%v", value.Interface())
valids := strings.Split(valid, ",")
for _, valid := range valids {
if vs == valid {
return nil
}
}
return &ErrorValid{
Value: vs,
Valid: valids,
}
}
func isZero(v reflect.Value) bool {
switch v.Kind() {
case reflect.Struct:
vt := v.Type()
for i := 0; i < v.NumField(); i++ {
if isFieldExported(vt.Field(i)) && !isZero(v.Field(i)) {
return false
}
}
return true
default:
return v.Interface() == reflect.Zero(v.Type()).Interface()
}
}
func isFieldExported(f reflect.StructField) bool {
return f.PkgPath == ""
}
func normalizeConfig(config string) ([]byte, error) {
var cfg map[interface{}]interface{}
if err := yaml.Unmarshal([]byte(config), &cfg); err != nil {
return nil, err
}
return yaml.Marshal(normalizeKeys(cfg))
}
func normalizeKeys(m map[interface{}]interface{}) map[interface{}]interface{} {
for k, v := range m {
if m, ok := m[k].(map[interface{}]interface{}); ok {
normalizeKeys(m)
}
if s, ok := m[k].([]interface{}); ok {
for _, e := range s {
if m, ok := e.(map[interface{}]interface{}); ok {
normalizeKeys(m)
}
}
}
delete(m, k)
m[strings.Replace(fmt.Sprint(k), "-", "_", -1)] = v
}
return m
}
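config.go above wires the vendored yaml package into cloud-config parsing: normalizeConfig round-trips the document through YAML so that dashed keys become underscored ones, and NewCloudConfig then unmarshals the normalized bytes into the CloudConfig struct. A short sketch of the intended call path — the import path follows the repo layout shown in this diff, and the quoted "off" value mirrors the normalization tests below; treat it as illustrative:

package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	userdata := `#cloud-config
hostname: trontastic
coreos:
  update:
    reboot-strategy: "off"
`
	if !config.IsCloudConfig(userdata) {
		panic("not a cloud-config")
	}
	// Dashed keys such as reboot-strategy are normalized to
	// reboot_strategy before unmarshalling into CloudConfig.
	cfg, err := config.NewCloudConfig(userdata)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Hostname, cfg.Coreos.Update.RebootStrategy) // trontastic off
}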

503
config/config_test.go Normal file
View File

@@ -0,0 +1,503 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"reflect"
"strings"
"testing"
)
func TestIsZero(t *testing.T) {
for _, tt := range []struct {
c interface{}
empty bool
}{
{struct{}{}, true},
{struct{ a, b string }{}, true},
{struct{ A, b string }{}, true},
{struct{ A, B string }{}, true},
{struct{ A string }{A: "hello"}, false},
{struct{ A int }{}, true},
{struct{ A int }{A: 1}, false},
} {
if empty := IsZero(tt.c); tt.empty != empty {
t.Errorf("bad result (%q): want %t, got %t", tt.c, tt.empty, empty)
}
}
}
func TestAssertStructValid(t *testing.T) {
for _, tt := range []struct {
c interface{}
err error
}{
{struct{}{}, nil},
{struct {
A, b string `valid:"1,2"`
}{}, nil},
{struct {
A, b string `valid:"1,2"`
}{A: "1", b: "2"}, nil},
{struct {
A, b string `valid:"1,2"`
}{A: "1", b: "hello"}, nil},
{struct {
A, b string `valid:"1,2"`
}{A: "hello", b: "2"}, &ErrorValid{Value: "hello", Field: "A", Valid: []string{"1", "2"}}},
{struct {
A, b int `valid:"1,2"`
}{}, nil},
{struct {
A, b int `valid:"1,2"`
}{A: 1, b: 2}, nil},
{struct {
A, b int `valid:"1,2"`
}{A: 1, b: 9}, nil},
{struct {
A, b int `valid:"1,2"`
}{A: 9, b: 2}, &ErrorValid{Value: "9", Field: "A", Valid: []string{"1", "2"}}},
} {
if err := AssertStructValid(tt.c); !reflect.DeepEqual(tt.err, err) {
t.Errorf("bad result (%q): want %q, got %q", tt.c, tt.err, err)
}
}
}
func TestCloudConfigInvalidKeys(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Fatalf("panic while instantiating CloudConfig with nil keys: %v", r)
}
}()
for _, tt := range []struct {
contents string
}{
{"coreos:"},
{"ssh_authorized_keys:"},
{"ssh_authorized_keys:\n -"},
{"ssh_authorized_keys:\n - 0:"},
{"write_files:"},
{"write_files:\n -"},
{"write_files:\n - 0:"},
{"users:"},
{"users:\n -"},
{"users:\n - 0:"},
} {
_, err := NewCloudConfig(tt.contents)
if err != nil {
t.Fatalf("error instantiating CloudConfig with invalid keys: %v", err)
}
}
}
func TestCloudConfigUnknownKeys(t *testing.T) {
contents := `
coreos:
etcd:
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
coreos_unknown:
foo: "bar"
section_unknown:
dunno:
something
bare_unknown:
bar
write_files:
- content: fun
path: /var/party
file_unknown: nofun
users:
- name: fry
passwd: somehash
user_unknown: philip
hostname:
foo
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("error instantiating CloudConfig with unknown keys: %v", err)
}
if cfg.Hostname != "foo" {
t.Fatalf("hostname not correctly set when invalid keys are present")
}
if cfg.Coreos.Etcd.Discovery != "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877" {
t.Fatalf("etcd section not correctly set when invalid keys are present")
}
if len(cfg.WriteFiles) < 1 || cfg.WriteFiles[0].Content != "fun" || cfg.WriteFiles[0].Path != "/var/party" {
t.Fatalf("write_files section not correctly set when invalid keys are present")
}
if len(cfg.Users) < 1 || cfg.Users[0].Name != "fry" || cfg.Users[0].PasswordHash != "somehash" {
t.Fatalf("users section not correctly set when invalid keys are present")
}
}
// Assert that the parsing of a cloud config file "generally works"
func TestCloudConfigEmpty(t *testing.T) {
cfg, err := NewCloudConfig("")
if err != nil {
t.Fatalf("Encountered unexpected error :%v", err)
}
keys := cfg.SSHAuthorizedKeys
if len(keys) != 0 {
t.Error("Parsed incorrect number of SSH keys")
}
if len(cfg.WriteFiles) != 0 {
t.Error("Expected zero WriteFiles")
}
if cfg.Hostname != "" {
t.Errorf("Expected hostname to be empty, got '%s'", cfg.Hostname)
}
}
// Assert that the parsing of a cloud config file "generally works"
func TestCloudConfig(t *testing.T) {
contents := `
coreos:
etcd:
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
update:
reboot_strategy: reboot
units:
- name: 50-eth0.network
runtime: yes
content: '[Match]
Name=eth47
[Network]
Address=10.209.171.177/19
'
oem:
id: rackspace
name: Rackspace Cloud Servers
version_id: 168.0.0
home_url: https://www.rackspace.com/cloud/servers/
bug_report_url: https://github.com/coreos/coreos-overlay
ssh_authorized_keys:
- foobar
- foobaz
write_files:
- content: |
penny
elroy
path: /etc/dogepack.conf
permissions: '0644'
owner: root:dogepack
hostname: trontastic
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error :%v", err)
}
keys := cfg.SSHAuthorizedKeys
if len(keys) != 2 {
t.Error("Parsed incorrect number of SSH keys")
} else if keys[0] != "foobar" {
t.Error("Expected first SSH key to be 'foobar'")
} else if keys[1] != "foobaz" {
t.Error("Expected first SSH key to be 'foobaz'")
}
if len(cfg.WriteFiles) != 1 {
t.Error("Failed to parse correct number of write_files")
} else {
wf := cfg.WriteFiles[0]
if wf.Content != "penny\nelroy\n" {
t.Errorf("WriteFile has incorrect contents '%s'", wf.Content)
}
if wf.Encoding != "" {
t.Errorf("WriteFile has incorrect encoding %s", wf.Encoding)
}
if wf.RawFilePermissions != "0644" {
t.Errorf("WriteFile has incorrect permissions %s", wf.RawFilePermissions)
}
if wf.Path != "/etc/dogepack.conf" {
t.Errorf("WriteFile has incorrect path %s", wf.Path)
}
if wf.Owner != "root:dogepack" {
t.Errorf("WriteFile has incorrect owner %s", wf.Owner)
}
}
if len(cfg.Coreos.Units) != 1 {
t.Error("Failed to parse correct number of units")
} else {
u := cfg.Coreos.Units[0]
expect := `[Match]
Name=eth47
[Network]
Address=10.209.171.177/19
`
if u.Content != expect {
t.Errorf("Unit has incorrect contents '%s'.\nExpected '%s'.", u.Content, expect)
}
if u.Runtime != true {
t.Errorf("Unit has incorrect runtime value")
}
if u.Name != "50-eth0.network" {
t.Errorf("Unit has incorrect name %s", u.Name)
}
}
if cfg.Coreos.OEM.ID != "rackspace" {
t.Errorf("Failed parsing coreos.oem. Expected ID 'rackspace', got %q.", cfg.Coreos.OEM.ID)
}
if cfg.Hostname != "trontastic" {
t.Errorf("Failed to parse hostname")
}
if cfg.Coreos.Update.RebootStrategy != "reboot" {
t.Errorf("Failed to parse locksmith strategy")
}
contents = `
coreos:
write_files:
- path: /home/me/notes
permissions: 0744
`
cfg, err = NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error :%v", err)
}
if len(cfg.WriteFiles) != 1 {
t.Error("Failed to parse correct number of write_files")
} else {
wf := cfg.WriteFiles[0]
if wf.Content != "" {
t.Errorf("WriteFile has incorrect contents '%s'", wf.Content)
}
if wf.Encoding != "" {
t.Errorf("WriteFile has incorrect encoding %s", wf.Encoding)
}
// Verify that the normalization of the config converted 0744 to its decimal
// representation, 484.
if wf.RawFilePermissions != "484" {
t.Errorf("WriteFile has incorrect permissions %s", wf.RawFilePermissions)
}
if wf.Path != "/home/me/notes" {
t.Errorf("WriteFile has incorrect path %s", wf.Path)
}
if wf.Owner != "" {
t.Errorf("WriteFile has incorrect owner %s", wf.Owner)
}
}
}
// Assert that our interface conversion doesn't panic
func TestCloudConfigKeysNotList(t *testing.T) {
contents := `
ssh_authorized_keys:
- foo: bar
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
keys := cfg.SSHAuthorizedKeys
if len(keys) != 0 {
t.Error("Parsed incorrect number of SSH keys")
}
}
func TestCloudConfigSerializationHeader(t *testing.T) {
cfg, _ := NewCloudConfig("")
contents := cfg.String()
header := strings.SplitN(contents, "\n", 2)[0]
if header != "#cloud-config" {
t.Fatalf("Serialized config did not have expected header")
}
}
func TestCloudConfigUsers(t *testing.T) {
contents := `
users:
- name: elroy
passwd: somehash
ssh_authorized_keys:
- somekey
gecos: arbitrary comment
homedir: /home/place
no_create_home: yes
primary_group: things
groups:
- ping
- pong
no_user_group: true
system: y
no_log_init: True
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
if len(cfg.Users) != 1 {
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
}
user := cfg.Users[0]
if user.Name != "elroy" {
t.Errorf("User name is %q, expected 'elroy'", user.Name)
}
if user.PasswordHash != "somehash" {
t.Errorf("User passwd is %q, expected 'somehash'", user.PasswordHash)
}
if keys := user.SSHAuthorizedKeys; len(keys) != 1 {
t.Errorf("Parsed %d ssh keys, expected 1", len(keys))
} else {
key := user.SSHAuthorizedKeys[0]
if key != "somekey" {
t.Errorf("User SSH key is %q, expected 'somekey'", key)
}
}
if user.GECOS != "arbitrary comment" {
t.Errorf("Failed to parse gecos field, got %q", user.GECOS)
}
if user.Homedir != "/home/place" {
t.Errorf("Failed to parse homedir field, got %q", user.Homedir)
}
if !user.NoCreateHome {
t.Errorf("Failed to parse no_create_home field")
}
if user.PrimaryGroup != "things" {
t.Errorf("Failed to parse primary_group field, got %q", user.PrimaryGroup)
}
if len(user.Groups) != 2 {
t.Errorf("Failed to parse 2 goups, got %d", len(user.Groups))
} else {
if user.Groups[0] != "ping" {
t.Errorf("First group was %q, not expected value 'ping'", user.Groups[0])
}
if user.Groups[1] != "pong" {
t.Errorf("First group was %q, not expected value 'pong'", user.Groups[1])
}
}
if !user.NoUserGroup {
t.Errorf("Failed to parse no_user_group field")
}
if !user.System {
t.Errorf("Failed to parse system field")
}
if !user.NoLogInit {
t.Errorf("Failed to parse no_log_init field")
}
}
func TestCloudConfigUsersGithubUser(t *testing.T) {
contents := `
users:
- name: elroy
coreos_ssh_import_github: bcwaldon
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
if len(cfg.Users) != 1 {
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
}
user := cfg.Users[0]
if user.Name != "elroy" {
t.Errorf("User name is %q, expected 'elroy'", user.Name)
}
if user.SSHImportGithubUser != "bcwaldon" {
t.Errorf("github user is %q, expected 'bcwaldon'", user.SSHImportGithubUser)
}
}
func TestCloudConfigUsersSSHImportURL(t *testing.T) {
contents := `
users:
- name: elroy
coreos_ssh_import_url: https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
if len(cfg.Users) != 1 {
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
}
user := cfg.Users[0]
if user.Name != "elroy" {
t.Errorf("User name is %q, expected 'elroy'", user.Name)
}
if user.SSHImportURL != "https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys" {
t.Errorf("ssh import url is %q, expected 'https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys'", user.SSHImportURL)
}
}
func TestNormalizeKeys(t *testing.T) {
for _, tt := range []struct {
in string
out string
}{
{"my_key_name: the-value\n", "my_key_name: the-value\n"},
{"my-key_name: the-value\n", "my_key_name: the-value\n"},
{"my-key-name: the-value\n", "my_key_name: the-value\n"},
{"a:\n- key_name: the-value\n", "a:\n- key_name: the-value\n"},
{"a:\n- key-name: the-value\n", "a:\n- key_name: the-value\n"},
{"a:\n b:\n - key_name: the-value\n", "a:\n b:\n - key_name: the-value\n"},
{"a:\n b:\n - key-name: the-value\n", "a:\n b:\n - key_name: the-value\n"},
{"coreos:\n update:\n reboot-strategy: off\n", "coreos:\n update:\n reboot_strategy: false\n"},
{"coreos:\n update:\n reboot-strategy: 'off'\n", "coreos:\n update:\n reboot_strategy: \"off\"\n"},
} {
out, err := normalizeConfig(tt.in)
if err != nil {
t.Fatalf("bad error (%q): want nil, got %s", tt.in, err)
}
if string(out) != tt.out {
t.Fatalf("bad normalization (%q): want %q, got %q", tt.in, tt.out, out)
}
}
}

19
config/etc_hosts.go Normal file
View File

@@ -0,0 +1,19 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
type EtcHosts string

53
config/etcd.go Normal file
View File

@@ -0,0 +1,53 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
type Etcd struct {
Addr string `yaml:"addr" env:"ETCD_ADDR"`
BindAddr string `yaml:"bind_addr" env:"ETCD_BIND_ADDR"`
CAFile string `yaml:"ca_file" env:"ETCD_CA_FILE"`
CertFile string `yaml:"cert_file" env:"ETCD_CERT_FILE"`
ClusterActiveSize int `yaml:"cluster_active_size" env:"ETCD_CLUSTER_ACTIVE_SIZE"`
ClusterRemoveDelay float64 `yaml:"cluster_remove_delay" env:"ETCD_CLUSTER_REMOVE_DELAY"`
ClusterSyncInterval float64 `yaml:"cluster_sync_interval" env:"ETCD_CLUSTER_SYNC_INTERVAL"`
CorsOrigins string `yaml:"cors" env:"ETCD_CORS"`
DataDir string `yaml:"data_dir" env:"ETCD_DATA_DIR"`
Discovery string `yaml:"discovery" env:"ETCD_DISCOVERY"`
GraphiteHost string `yaml:"graphite_host" env:"ETCD_GRAPHITE_HOST"`
HTTPReadTimeout float64 `yaml:"http_read_timeout" env:"ETCD_HTTP_READ_TIMEOUT"`
HTTPWriteTimeout float64 `yaml:"http_write_timeout" env:"ETCD_HTTP_WRITE_TIMEOUT"`
KeyFile string `yaml:"key_file" env:"ETCD_KEY_FILE"`
MaxResultBuffer int `yaml:"max_result_buffer" env:"ETCD_MAX_RESULT_BUFFER"`
MaxRetryAttempts int `yaml:"max_retry_attempts" env:"ETCD_MAX_RETRY_ATTEMPTS"`
Name string `yaml:"name" env:"ETCD_NAME"`
PeerAddr string `yaml:"peer_addr" env:"ETCD_PEER_ADDR"`
PeerBindAddr string `yaml:"peer_bind_addr" env:"ETCD_PEER_BIND_ADDR"`
PeerCAFile string `yaml:"peer_ca_file" env:"ETCD_PEER_CA_FILE"`
PeerCertFile string `yaml:"peer_cert_file" env:"ETCD_PEER_CERT_FILE"`
PeerElectionTimeout int `yaml:"peer_election_timeout" env:"ETCD_PEER_ELECTION_TIMEOUT"`
PeerHeartbeatInterval int `yaml:"peer_heartbeat_interval" env:"ETCD_PEER_HEARTBEAT_INTERVAL"`
PeerKeyFile string `yaml:"peer_key_file" env:"ETCD_PEER_KEY_FILE"`
Peers string `yaml:"peers" env:"ETCD_PEERS"`
PeersFile string `yaml:"peers_file" env:"ETCD_PEERS_FILE"`
RetryInterval float64 `yaml:"retry_interval" env:"ETCD_RETRY_INTERVAL"`
Snapshot bool `yaml:"snapshot" env:"ETCD_SNAPSHOT"`
SnapshotCount int `yaml:"snapshot_count" env:"ETCD_SNAPSHOTCOUNT"`
StrTrace string `yaml:"trace" env:"ETCD_TRACE"`
Verbose bool `yaml:"verbose" env:"ETCD_VERBOSE"`
VeryVerbose bool `yaml:"very_verbose" env:"ETCD_VERY_VERBOSE"`
VeryVeryVerbose bool `yaml:"very_very_verbose" env:"ETCD_VERY_VERY_VERBOSE"`
}

25
config/file.go Normal file
View File

@@ -0,0 +1,25 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
type File struct {
Encoding string `yaml:"-"`
Content string `yaml:"content"`
Owner string `yaml:"owner"`
Path string `yaml:"path"`
RawFilePermissions string `yaml:"permissions"`
}

Some files were not shown because too many files have changed in this diff.