Compare commits


166 Commits

Author SHA1 Message Date
Alex Crawford
14cad6f7c3 coreos-cloudinit: bump to v1.3.3 2015-02-24 12:24:37 -08:00
Alex Crawford
6f188bd5d4 Merge pull request #319 from Vladimiroff/fix-cloudsigma-empty-ssh-keys
Make sure public ssh key is not empty from CloudSigma's server context
2015-02-23 10:58:17 -08:00
Alex Crawford
41832ab19e Merge pull request #320 from ibuildthecloud/typo-contents
Fix typo, "contents" should be "content"
2015-02-23 10:41:55 -08:00
Darren Shepherd
672e4c07af Fix typo, "contents" should be "content"
The validation of the encoding for write_files was looking
for a node named "contents" when the node name is "content"

Signed-off-by: Darren Shepherd <darren@rancher.com>
2015-02-23 09:17:21 -07:00
Kiril Vladimirov
be53013431 fix(datasource/CloudSigma): Add a test for an empty ssh key 2015-02-22 05:38:57 +02:00
Kiril Vladimirov
c30fc51b03 fix(datasource/CloudSigma): Make sure public ssh key is not empty
Even when the public ssh key is not set by the user, CloudSigma's server
context has a key `meta.ssh_public_key` which is just an empty string.
So instead of relying only on the "comma ok" idiom, make sure the value
is not an empty string.
2015-02-21 19:31:01 +02:00
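
A minimal sketch of the check described in this commit, assuming a hypothetical map-shaped server context and using the `meta.ssh_public_key` key named above; it is not the datasource's actual code:

```go
package main

import "fmt"

// sshPublicKey mirrors the idea above: the "comma ok" idiom alone is not
// enough, because meta.ssh_public_key can be present but empty.
func sshPublicKey(serverContext map[string]string) (string, bool) {
	key, ok := serverContext["meta.ssh_public_key"]
	if !ok || key == "" {
		return "", false
	}
	return key, true
}

func main() {
	ctx := map[string]string{"meta.ssh_public_key": ""}
	if _, ok := sshPublicKey(ctx); !ok {
		fmt.Println("no usable ssh key in server context")
	}
}
```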
Rob Szumski
b429eaab84 docs: fix formatting 2015-02-18 15:05:28 -08:00
Alex Crawford
e0104e6d93 coreos-cloudinit: bump to v1.3.2+git 2015-02-18 11:13:34 -08:00
Alex Crawford
7bf9712724 coreos-cloudinit: bump to v1.3.2 2015-02-18 11:12:52 -08:00
Alex Crawford
78b0f82918 Merge pull request #318 from crawford/filesystem
configdrive: correct network config reading and improve tests
2015-02-17 13:40:05 -08:00
Alex Crawford
987aa21883 configdrive: check the network config path
Check to make sure that a network config path has been specified before
trying to read from it. Otherwise, it will end up trying to read a
directory.
2015-02-17 13:27:30 -08:00
Alex Crawford
47ac4f6931 test: add directory support to MockFilesystem 2015-02-17 13:27:30 -08:00
Alex Crawford
f8aa7a43b8 coreos-cloudinit: bump to v1.3.1+git 2015-02-13 10:25:01 -08:00
Alex Crawford
2fe0b0b2a8 coreos-cloudinit: bump to v1.3.1 2015-02-13 10:24:41 -08:00
Alex Crawford
19ce7ac849 Merge pull request #317 from crawford/json
configdrive: check metadata length before parsing
2015-02-13 10:22:57 -08:00
Alex Crawford
477053ffde configdrive: check metadata length before parsing
This was lost in some of the recent refactoring.
2015-02-13 10:20:08 -08:00
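
A hedged sketch of the kind of guard the message describes, using encoding/json directly rather than the project's actual configdrive code; names are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseMetadata skips JSON parsing entirely when the fetched metadata is
// empty, instead of handing a zero-length byte slice to the decoder.
func parseMetadata(data []byte) (map[string]interface{}, error) {
	if len(data) == 0 {
		return nil, nil // nothing to parse; not an error
	}
	var metadata map[string]interface{}
	if err := json.Unmarshal(data, &metadata); err != nil {
		return nil, err
	}
	return metadata, nil
}

func main() {
	m, err := parseMetadata([]byte(`{"hostname": "node-1"}`))
	fmt.Println(m, err)
}
```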
Alex Crawford
eb0d2dbfa3 coreos-cloudinit: bump to v1.3.0+git 2015-02-11 17:19:32 -08:00
Alex Crawford
18caa5bf07 coreos-cloudinit: bump to v1.3.0 2015-02-11 17:19:05 -08:00
Alex Crawford
a27bbb912f Merge pull request #315 from crawford/cloudsigma
oem: add CloudSigma OEM
2015-02-11 17:15:23 -08:00
Alex Crawford
3b2af743bd oem: add CloudSigma OEM 2015-02-11 17:03:24 -08:00
Alex Crawford
995bc63abe Merge pull request #313 from jimmycuadra/etcd-2.0-configuration
Add etcd 2.0 configuration support
2015-02-09 12:33:28 -08:00
Alex Crawford
a28f870302 Merge pull request #314 from robszumski/master
docs: link to CoreUpdate and Omaha
2015-02-06 14:15:39 -08:00
Rob Szumski
a3357c273c docs: update links 2015-02-06 13:43:06 -08:00
Rob Szumski
080c698ec2 docs: link to CoreUpdate and Omaha 2015-02-06 13:31:50 -08:00
Jimmy Cuadra
afbf1dbb3a Add etcd 2.0 configuration flag support. 2015-02-05 02:57:11 -08:00
Alex Crawford
a275e18533 Merge pull request #308 from demonbane/github-capitalization
Fix GitHub capitalization
2015-01-28 14:28:53 -08:00
Alex Malinovich
cf3baa8805 Fix GitHub capitalization 2015-01-28 14:19:08 -08:00
Jonathan Boulle
ed84bcef04 Merge pull request #307 from jonboulle/master
docs: fix typo
2015-01-28 10:25:15 -08:00
Jonathan Boulle
7d8b29e597 docs: fix typo 2015-01-28 10:19:16 -08:00
Alex Crawford
4eaaa5c927 Merge pull request #304 from crawford/metadata
datasource: improve metadata handling
2015-01-27 14:36:12 -08:00
Alex Crawford
536f8acf2a config: remove network config from CloudConfig 2015-01-26 17:35:08 -08:00
Alex Crawford
9605b5edf2 datasource: remove FetchNetworkConfig step
It's easier to let each datasource grab all metadata in the FetchMetadata
stage than to break it into multiple stages.
2015-01-26 16:08:26 -08:00
Alex Crawford
42153edbbc initialize: change env to process from metadata
We don't use general substitutions so just have env pick the values it
wants to substitute.
2015-01-26 16:08:26 -08:00
Alex Crawford
650a239fdb metadata: simplify merging of metadata
Add an internal field for CloudConfig to make it easier to distinguish.
Instead of creating two CloudConfigs and merging them, just merge the
metadata into the existing CloudConfig.
2015-01-26 16:08:26 -08:00
Alex Crawford
3e47c09b41 datasource: replace metadata map with struct
The loosely-typed metadata map is a load of crap. Make it a struct and
let the compiler help us out.
2015-01-26 15:57:57 -08:00
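
An illustrative struct along the lines the message describes; the field set here is a plausible sketch for a metadata type, not necessarily the exact type that landed:

```go
package datasource // hypothetical package name for the sketch

import "net"

// Metadata replaces a loosely-typed map[string]interface{} with fields the
// compiler can check at every call site that consumes them.
type Metadata struct {
	PublicIPv4    net.IP
	PrivateIPv4   net.IP
	Hostname      string
	SSHPublicKeys map[string]string
	NetworkConfig []byte
}
```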
Alex Crawford
d4c617fc23 config: standardize interface a bit 2015-01-26 15:57:55 -08:00
Alex Crawford
9441586229 test: check all root golang files 2015-01-26 15:57:54 -08:00
Alex Crawford
be62a1df66 test: DRY out MockFilesystem 2015-01-26 15:57:51 -08:00
Jonathan Boulle
c093e44049 Merge pull request #305 from jonboulle/copyright
*: switch to line comments for copyright
2015-01-25 10:28:52 -08:00
Jonathan Boulle
be68a8e5cc *: switch to line comments for copyright
Build tags are not compatible with block comments. Also adds a copyright
header to a few places where it was missing.
2015-01-24 19:32:33 -08:00
Alex Crawford
58b4de8093 coreos-cloudinit: bump to v1.2.1+git 2015-01-21 14:29:51 -08:00
Alex Crawford
ae3676096c coreos-cloudinit: bump to v1.2.1 2015-01-21 14:29:25 -08:00
Alex Crawford
a548b557ed doc: add coreos-ssh-import-github-users 2015-01-21 14:28:43 -08:00
Alex Crawford
a9c132a706 coreos-cloudinit: bump to v1.2.0+git 2015-01-20 14:42:48 -08:00
Alex Crawford
c3c4b86a3b coreos-cloudinit: bump to v1.2.0 2015-01-20 14:42:18 -08:00
Alex Crawford
44142ff8af Merge pull request #301 from crawford/github
config: add support for multiple github users
2015-01-20 14:26:38 -08:00
Alex Crawford
e9529ede44 config: add support for multiple github users 2015-01-20 13:45:52 -08:00
Alex Crawford
4b5b801171 Merge pull request #295 from crawford/rules
config/validate: add some sanity checks
2015-01-15 16:25:37 -08:00
Alex Crawford
551cbb1e5d config/validate: add rule for file encoding 2015-01-15 15:01:44 -08:00
Alex Crawford
3c93938f8a decode: refactor file decoding into config package 2015-01-15 15:01:44 -08:00
Alex Crawford
f61c08c246 config/validate: add rule for coreos.write_files 2015-01-15 15:01:44 -08:00
Alex Crawford
571903cec6 config/validate: add rule for etcd discovery token 2015-01-15 15:01:44 -08:00
Alex Crawford
bdbd1930ed config/validate: add rule for file paths 2015-01-15 15:01:44 -08:00
Alex Crawford
cc75a943ba Merge pull request #300 from crawford/float
validate: allow promotion of int to float64
2015-01-15 13:17:44 -08:00
Alex Crawford
fc77ba6355 validate: allow promotion of int to float64 2015-01-14 17:54:01 -08:00
Eugene Yakubovich
7cfa0df7c4 Merge pull request #299 from eyakubovich/master
config: document and update flannel config values
2015-01-13 15:55:42 -08:00
Eugene Yakubovich
58f0dadaf9 config: document and update flannel config values
Fixes #297
2015-01-13 15:51:47 -08:00
Michael Marineau
1ab530f157 Merge pull request #293 from realestate-com-au/master
select first available hostname returned by EC2 metadata.
2015-01-05 12:47:27 -08:00
Kevin Yung
13e4b77130 ec2: allow space-separated hostnames in metadata
AWS hostname metadata will return space-separated hostname and domain
names when the DHCPOptionSet uses multiple domain names. This patch
caters for this scenario.
2015-01-05 16:01:57 +11:00
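
A small sketch of the selection described above; the `metaHostname` value is a stand-in for whatever string the EC2 metadata service returned:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// With multiple domain names in the DHCPOptionSet, the metadata value
	// can look like "host.internal host.example.com"; keep the first token.
	metaHostname := "ip-10-0-0-12.ec2.internal ip-10-0-0-12.example.com"
	if fields := strings.Fields(metaHostname); len(fields) > 0 {
		fmt.Println(fields[0]) // ip-10-0-0-12.ec2.internal
	}
}
```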
Alex Crawford
54c62cbb70 coreos-cloudinit: bump to v1.1.0+git 2014-12-30 16:52:10 +01:00
Alex Crawford
c8e864fef5 coreos-cloudinit: bump to v1.1.0 2014-12-30 16:51:24 +01:00
Alex Crawford
60a3377e7c Merge pull request #290 from crawford/yaml
Improved YAML parsing
2014-12-30 16:24:12 +01:00
Alex Crawford
5527f09778 config: fix parsing of file permissions
This reintroduces the braindead '744' syntax for file permissions. Even
though this number isn't octal, it is assumed by convention to be. In
order to pull this off, coerceNodes() was introduced to try to
counteract the type inferencing that occurs during the yaml
unmarshalling. The config is unmarshalled twice: once into an empty
interface and once into the CloudConfig structure. The two resulting
node structures are combined together. The nodes from the CloudConfig
process replace those from the interface{} when the types of the two
nodes are compatible. For example, with the input `0744`, yaml
interprets that as the integer 484, giving us the nodes '0744' (string)
and 484 (int). Because the types string and int are compatible, we opt to
take the string node instead of the integer.
2014-12-30 16:20:21 +01:00
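
A loose illustration of the preference rule described above, not the project's actual coerceNodes(); it only shows why the string node is kept when the two passes produce compatible scalar types:

```go
package main

import "fmt"

// coerce prefers the node from the CloudConfig pass (here a string) over the
// node from the interface{} pass (here an int) when the two are compatible,
// so the original octal text "0744" survives instead of the integer 484.
func coerce(ifaceNode, structNode interface{}) interface{} {
	if _, isInt := ifaceNode.(int); isInt {
		if s, isString := structNode.(string); isString {
			return s
		}
	}
	return ifaceNode
}

func main() {
	fmt.Println(coerce(484, "0744")) // 0744
}
```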
Alex Crawford
54a64454b9 validate: fix printing for non-string values 2014-12-30 16:20:21 +01:00
Alex Crawford
0e70d4f01f config: add validity check for file permissions 2014-12-30 16:20:21 +01:00
Alex Crawford
af8e590575 config: change valid tag to use regexp
A regular expression is much more useful than a list of strings.
2014-12-30 16:20:21 +01:00
Alex Crawford
40d943fb7a reboot-strategy: remove the 'false' value
Since we no longer do a two-stage unmarshal, the 'false' value is no
longer necessary.
2014-12-30 16:20:21 +01:00
Alex Crawford
248536a5cd config: use a YAML transform to normalize keys
This removes the problematic two-pass unmarshalling.
2014-12-30 16:20:21 +01:00
Alex Crawford
4ed1d03c97 godeps: bump github.com/coreos/yaml 2014-12-30 16:20:20 +01:00
Alex Crawford
057ab37364 config: separate the CoreOS type from CloudConfig
Renamed Coreos to CoreOS while I was at it.
2014-12-30 16:20:20 +01:00
Alex Crawford
182241c8d3 config: clean up and remove some tests
Small modification to make these align with our test-table-style tests.
Also removed TestCloudConfigInvalidKeys since it hasn't been a useful
test since d3294bcb86.
2014-12-30 16:19:00 +01:00
Michael Marineau
edced59fa6 Merge pull request #281 from thommay/flannel_env_file
Create an environment file for flannel
2014-12-29 15:07:08 -08:00
Thom May
9be836df31 Create an environment file for flannel
Rather than using a systemd overlay, allow docker to load the
environment file. This is due to coreos/coreos-overlay#1002
2014-12-29 10:27:22 +00:00
Jonathan Boulle
4e54447b8e Merge pull request #286 from jonboulle/master
Godeps: switch to coreos/yaml
2014-12-20 15:43:55 -08:00
Jonathan Boulle
999c38b09b Godeps: switch to coreos/yaml 2014-12-20 15:31:02 -08:00
Alex Crawford
06d13de5c3 coreos-cloudinit: bump to v1.0.2+git 2014-12-12 17:38:28 -08:00
Alex Crawford
5b0903d162 coreos-cloudinit: bump to v1.0.2 2014-12-12 17:37:39 -08:00
Alex Crawford
10669be7c0 Merge pull request #284 from crawford/travis
test: disable Travis sudo capability
2014-12-12 17:28:47 -08:00
Alex Crawford
2edae741e1 test: disable Travis sudo capability 2014-12-12 16:46:18 -08:00
Alex Crawford
ea90e553d1 Merge pull request #282 from crawford/network
network: write network units with user units
2014-12-12 15:12:50 -08:00
Alex Crawford
b0cfd86902 network: write network units with user units
This allows us to test the network unit generation as well as removing
some special-cased code.
2014-12-12 15:08:03 -08:00
Alex Crawford
565a9540c9 Merge pull request #283 from crawford/validate
validate: empty user_data is also valid
2014-12-12 15:05:51 -08:00
Alex Crawford
fd10e27b99 validate: empty user_data is also valid 2014-12-12 14:49:42 -08:00
Michael Marineau
39763d772c Merge pull request #280 from marineam/go1.4
travis: Add Go 1.4 as a test target
2014-12-11 16:34:00 -08:00
Michael Marineau
ee69b77bfb travis: Add Go 1.4 as a test target 2014-12-11 15:29:36 -08:00
Jonathan Boulle
353444e56d Merge pull request #279 from cnelson/write_files_encoding_support
Add support for the encoding key to write_files
2014-12-09 17:37:34 -08:00
cnelson
112ba1e31f Added support for the encoding key in write_files
Supported encodings are base64, gzip, and base64 encoded gzip
2014-12-09 17:35:33 -08:00
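
A hedged sketch of decoding by the encodings listed above; the function and its behaviour are illustrative rather than the project's actual write_files code:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
	"io/ioutil"
)

// decodeContent turns the raw content string into the bytes to write,
// according to the write_files encoding key.
func decodeContent(content, encoding string) ([]byte, error) {
	switch encoding {
	case "":
		return []byte(content), nil
	case "b64", "base64":
		return base64.StdEncoding.DecodeString(content)
	case "gz", "gzip":
		return gunzip([]byte(content))
	case "gz+b64", "gz+base64", "gzip+b64", "gzip+base64":
		raw, err := base64.StdEncoding.DecodeString(content)
		if err != nil {
			return nil, err
		}
		return gunzip(raw)
	}
	return nil, fmt.Errorf("unsupported encoding %q", encoding)
}

func gunzip(data []byte) ([]byte, error) {
	r, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return ioutil.ReadAll(r)
}

func main() {
	out, _ := decodeContent("SGVsbG8sIHdvcmxkIQ==", "base64")
	fmt.Println(string(out)) // Hello, world!
}
```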
cnelson
9c3cd9e69c bumped version of yaml.v1 2014-12-09 07:49:59 -08:00
Alex Crawford
685d8317bc Merge pull request #275 from mwhooker/master
Enable configuration of locksmithd
2014-12-05 14:09:40 -08:00
Matthew Hooker
f42d102b26 Also fix wording of Flannel section 2014-12-04 23:55:26 -08:00
Matthew Hooker
c944e9ef94 Enable configuration of locksmithd 2014-12-04 23:53:31 -08:00
Alex Crawford
f10d6e8bef coreos-cloudinit: bump to v1.0.1+git 2014-12-04 16:35:20 -08:00
Alex Crawford
f3f3af79fd coreos-cloudinit: bump to v1.0.1 2014-12-04 16:34:57 -08:00
Alex Crawford
0e63aa0f6b Merge pull request #276 from crawford/networkd
initialize: restart networkd before other units
2014-12-04 16:33:02 -08:00
Alex Crawford
b254e17e89 Merge pull request #263 from robszumski/docs-validator
docs: link to validator
2014-12-04 16:28:21 -08:00
Alex Crawford
5c059b66f0 initialize: restart networkd before other units 2014-12-04 15:25:44 -08:00
Alex Crawford
c628bef666 Merge pull request #273 from crawford/networkd
initialize: only restart networkd once per config
2014-12-02 12:54:27 -08:00
Alex Crawford
2270db3f7a initialize: only restart networkd once per config
Regression introduced by 9fcf338bf3.

Networkd was erroneously being restarted once per network unit. It
should only restart once for the entire config.
2014-12-02 12:46:35 -08:00
Alex Crawford
d0d467813d Merge pull request #251 from Vladimiroff/master
metadata: Populate CloudSigma's IPs properly
2014-12-01 14:52:11 -08:00
Alex Crawford
123f111efe coreos-cloudinit: bump to 1.0.0+git 2014-11-26 14:19:29 -08:00
Alex Crawford
521ecfdab5 coreos-cloudinit: bump to 1.0.0 2014-11-26 14:19:13 -08:00
Alex Crawford
6d0fdf1a47 Merge pull request #268 from crawford/dropins
drop-in: add support for drop-ins
2014-11-26 14:14:49 -08:00
Alex Crawford
ffc54b028c drop-in: add support for drop-ins
This allows a list of drop-ins for a unit to be declared inline within a
cloud-config. For example:

  #cloud-config
  coreos:
    units:
      - name: docker.service
        drop-ins:
          - name: 50-insecure-registry.conf
            content: |
              [Service]
              Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
2014-11-26 14:09:35 -08:00
Alex Crawford
420f7cf202 system: clean up TestPlaceUnit() 2014-11-26 10:32:43 -08:00
Alex Crawford
624df676d0 config/unit: move Type() and Group() out of config 2014-11-26 10:32:43 -08:00
Alex Crawford
75ed8dacf9 initialize: clean up TestProcessUnits() 2014-11-26 10:32:43 -08:00
Alex Crawford
dcaabe4d4a system: clean up UnitManager interface 2014-11-26 10:32:43 -08:00
Alex Crawford
92c57423ba Merge pull request #269 from crawford/valid
validate: promote invalid values to an error
2014-11-26 10:32:27 -08:00
Alex Crawford
7447e133c9 validate: promote invalid values to an error 2014-11-26 10:29:09 -08:00
Eugene Yakubovich
4e466c12da Merge pull request #267 from thommay/flannel_unit
the flannel service is called flanneld
2014-11-25 12:30:58 -08:00
Thom May
333468dba3 the flannel service is called flanneld 2014-11-25 14:00:53 +00:00
Alex Crawford
55c3a793ad coreos-cloudinit: bump to 0.11.4+git 2014-11-21 20:11:54 -08:00
Alex Crawford
eca51031c8 coreos-cloudinit: bump to 0.11.4 2014-11-21 20:11:37 -08:00
Alex Crawford
19522bcb82 Merge pull request #266 from crawford/config
config: update configs to match etcd, fleet, and flannel
2014-11-21 20:10:34 -08:00
Alex Crawford
62248ea33d config/fleet: fix configs
Added EtcdKeyPrefix and fixed the types of EngineReconcileInterval and EtcdRequestTimeout.
2014-11-21 16:57:00 -08:00
Alex Crawford
d2a19cc86d config/flannel: correct - vs _ 2014-11-21 16:57:00 -08:00
Alex Crawford
08131ffab1 config/etcd: fix configs
This new table is pulled from the etcd codebase rather than the docs...

Added:
 GraphiteHost
 PeerElectionTimeout
 PeerHeartbeatInterval
 PeerKeyFile
 RetryInterval
 SnapshotCount
 StrTrace
 VeryVeryVerbose

Fixed types:
 ClusterActiveSize
 ClusterRemoveDelay
 ClusterSyncInterval
 HTTPReadTimeout
 HTTPWriteTimeout
 MaxResultBuffer
 MaxRetryAttempts
 Snapshot
 Verbose
 VeryVerbose

Renamed:
 Cors

Removed:
 MaxClusterSize
 CPUProfileFile
2014-11-21 16:57:00 -08:00
Alex Crawford
4a0019c669 config: add support for float64 2014-11-21 16:13:49 -08:00
Alex Crawford
3275ead1ec coreos-cloudinit: bump to 0.11.3+git 2014-11-21 12:25:26 -08:00
Alex Crawford
32b6a55724 coreos-cloudinit: bump to 0.11.3 2014-11-21 12:25:04 -08:00
Alex Crawford
6c43644369 Merge pull request #265 from crawford/update
config/update: add "off" as a valid strategy
2014-11-21 12:22:45 -08:00
Alex Crawford
e6593d49e6 config/update: add "off" as a valid strategy
It was assumed that the user would specify the reboot strategy as an
unquoted value. In the case that they turn off updates, `off` is
interpreted as a boolean and the normalization pass converts that to
`false`. In the event that the user uses `"off"`, it's interpreted as a
string and not modified.
2014-11-21 10:41:03 -08:00
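
A small demonstration of the YAML 1.1 behaviour the message relies on, using gopkg.in/yaml.v2 as a stand-in for the forked coreos/yaml package the project vendors:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var unquoted, quoted map[string]interface{}
	yaml.Unmarshal([]byte(`reboot-strategy: off`), &unquoted)
	yaml.Unmarshal([]byte(`reboot-strategy: "off"`), &quoted)
	// YAML 1.1 treats the bare scalar off as a boolean...
	fmt.Printf("%T %v\n", unquoted["reboot-strategy"], unquoted["reboot-strategy"]) // bool false
	// ...while the quoted form stays a string and is left untouched.
	fmt.Printf("%T %v\n", quoted["reboot-strategy"], quoted["reboot-strategy"]) // string off
}
```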
Alex Crawford
ab752b239f coreos-cloudinit: bump to 0.11.2+git 2014-11-20 11:29:25 -08:00
Alex Crawford
0742e4d357 coreos-cloudinit: bump to 0.11.2 2014-11-20 11:29:12 -08:00
Alex Crawford
78f586ec9e Merge pull request #262 from crawford/permissions
config: fix parsing of file permissions
2014-11-20 11:28:11 -08:00
Alex Crawford
6f91b76d79 docs: correct type of permissions 2014-11-20 11:14:44 -08:00
Alex Crawford
5c80ccacc4 config: fix parsing of file permissions
The file permissions can be specified (unfortunately) as a string or an
octal integer. During the normalization step, every field is
unmarshalled into an interface{}. String types are kept intact but
integers are converted to decimal integers. If the raw config
represented the permissions as an octal, it would be converted to
decimal _before_ it was saved to RawFilePermissions. Permissions() would
then try to convert it again, assuming it was an octal. The new behavior
doesn't assume the radix of the number, allowing decimal and octal
input.
2014-11-20 11:14:44 -08:00
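
A hedged sketch of radix-agnostic parsing as described above: base 0 lets the Go parser infer octal from a leading zero, so the preserved "0744" text and a decimal 484 resolve to the same mode. This is an illustration, not the project's Permissions() implementation:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

func filePermissions(raw string) (os.FileMode, error) {
	// base 0: "0744" is parsed as octal, "484" as decimal; both give 484.
	perm, err := strconv.ParseInt(raw, 0, 32)
	if err != nil {
		return 0, err
	}
	return os.FileMode(perm), nil
}

func main() {
	a, _ := filePermissions("0744")
	b, _ := filePermissions("484")
	fmt.Println(a, b) // -rwxr--r-- -rwxr--r--
}
```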
Rob Szumski
44fdf95d99 docs: mention validate flag 2014-11-20 11:12:31 -08:00
Rob Szumski
0a62614eec docs: link to validator 2014-11-20 10:58:57 -08:00
Alex Crawford
97758b343b coreos-cloudinit: bump to 0.11.1+git 2014-11-18 12:14:34 -08:00
Alex Crawford
fb6f52b360 coreos-cloudinit: bump to 0.11.1 2014-11-18 12:14:29 -08:00
Alex Crawford
786cd2a539 Merge pull request #259 from crawford/hyphen
config/validate: disable - vs _ message for now
2014-11-18 12:12:26 -08:00
Alex Crawford
45793f1254 config/validate: disable - vs _ message for now 2014-11-18 12:11:50 -08:00
Alex Crawford
b621756d92 Merge pull request #258 from crawford/header
config/validate: fix line number for header check
2014-11-18 12:11:35 -08:00
Alex Crawford
a5b5c700a6 config/validate: fix line number for header check 2014-11-18 12:02:23 -08:00
Kiril Vladimirov
ea95920f31 fix(datasource/CloudSigma): Make sure DHCP has run 2014-11-17 15:35:10 +02:00
Alex Crawford
d7602f3c08 Merge pull request #244 from eyakubovich/master
flannel: added flannel support and helper to make dropins
2014-11-14 10:46:19 -08:00
Eugene Yakubovich
a20addd05e flannel: added flannel support and helper to make dropins
fleet, flannel, and etcd all generate dropins from config.
To reduce code duplication, factor out a helper to do that.
2014-11-14 10:45:23 -08:00
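
A rough sketch of the kind of shared helper the message describes: rendering normalized config keys as a [Service] drop-in that exports prefixed environment variables. The function name and exact output shape are assumptions:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
	"strings"
)

// dropinContents renders config keys as FLANNELD_*/ETCD_*/FLEET_* style
// Environment= lines, the shape of drop-in that the etcd, fleet, and
// flannel units all consume.
func dropinContents(prefix string, config map[string]string) string {
	keys := make([]string, 0, len(config))
	for k := range config {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output

	var b bytes.Buffer
	b.WriteString("[Service]\n")
	for _, k := range keys {
		env := prefix + "_" + strings.ToUpper(strings.Replace(k, "-", "_", -1))
		fmt.Fprintf(&b, "Environment=\"%s=%s\"\n", env, config[k])
	}
	return b.String()
}

func main() {
	fmt.Print(dropinContents("FLANNELD", map[string]string{"etcd_prefix": "/coreos.com/network2"}))
}
```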
Alex Crawford
d9d89a6fa0 coreos-cloudinit: bump to 0.11.0+git 2014-11-14 10:42:00 -08:00
Alex Crawford
3c26376326 coreos-cloudinit: bump to 0.11.0 2014-11-14 10:41:47 -08:00
Alex Crawford
d3294bcb86 Merge pull request #254 from crawford/validator
config: add new validator
2014-11-12 17:40:16 -08:00
Alex Crawford
dda314b518 flags: add validate flag
This will allow the user to run a standalone validation.
2014-11-12 16:48:57 -08:00
Alex Crawford
055a3c339a config/validate: add new config validator
This validator is still experimental and is going to need new rules in the
future. This lays out the general framework.
2014-11-12 16:48:57 -08:00
Alex Crawford
51f37100a1 config: remove config validator 2014-11-07 10:18:08 -08:00
Alex Crawford
88e8265cd6 config: separate AssertValid and AssertStructValid
Added an error structure to make it possible to get the specifics of the failure.
2014-11-07 10:14:34 -08:00
Alex Crawford
6e2db882e6 script: move Script into config package 2014-11-07 10:13:52 -08:00
Alex Crawford
3e2823df1b Merge pull request #256 from crawford/hyphen
config: deprecate - in favor of _ for key names
2014-11-03 14:54:23 -08:00
Alex Crawford
46cb51cf91 Merge pull request #257 from crawford/networkd
networkd: remove double-restart workaround
2014-11-03 14:25:38 -08:00
Alex Crawford
1a6cee5305 networkd: remove double-restart workaround
The kernel fixed the underlying issue in 763e0ec and e721f87.
2014-11-03 14:11:15 -08:00
Alex Crawford
d02aa18839 config: deprecate - in favor of _ for key names
In all of the YAML tags, - has been replaced with _. normalizeConfig() and
normalizeKeys() have also been added to perform the normalization of the input
cloud-config.

As part of the normalization process, falsey values are converted to "false".
The "off" update strategy is no exception and as a result the "off" update
strategy has been changed to "false".
2014-11-03 12:09:52 -08:00
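
A loose sketch of the key normalization described above (not the project's normalizeKeys()): walk the decoded YAML tree and rewrite hyphens to underscores in map keys, leaving values alone:

```go
package main

import (
	"fmt"
	"strings"
)

func normalizeKeys(in map[interface{}]interface{}) map[interface{}]interface{} {
	out := make(map[interface{}]interface{}, len(in))
	for k, v := range in {
		if s, ok := k.(string); ok {
			k = strings.Replace(s, "-", "_", -1)
		}
		if child, ok := v.(map[interface{}]interface{}); ok {
			v = normalizeKeys(child)
		}
		out[k] = v
	}
	return out
}

func main() {
	cfg := map[interface{}]interface{}{
		"coreos": map[interface{}]interface{}{"reboot-strategy": "etcd-lock"},
	}
	fmt.Println(normalizeKeys(cfg)) // map[coreos:map[reboot_strategy:etcd-lock]]
}
```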
Alex Crawford
e9bda98b54 Merge pull request #252 from crawford/vet
go vet
2014-10-23 12:03:01 -07:00
Alex Crawford
badc874b74 travis: install go vet 2014-10-23 11:47:24 -07:00
Alex Crawford
c9e8c887b8 test: run go vet 2014-10-23 11:46:40 -07:00
Alex Crawford
8be307de49 *: fix warnings from go vet 2014-10-23 11:46:08 -07:00
Alex Crawford
562c474275 system: embed config within EtcHosts and Update 2014-10-23 11:44:15 -07:00
Kiril Vladimirov
b6062f0644 fix(datasource/CloudSigma): Populate local IPv4 address properly 2014-10-23 15:03:23 +03:00
Kiril Vladimirov
c5fada6e69 fix(datasource/CloudSigma): Populate public IPv4 address properly 2014-10-23 13:21:49 +03:00
Jonathan Boulle
5c5834863b Merge pull request #250 from jonboulle/master
*: switch to Godeps
2014-10-20 12:09:04 -07:00
Jonathan Boulle
44f0a949c5 *: switch to Godeps 2014-10-20 12:04:03 -07:00
Jonathan Boulle
106c4e7a2c Merge pull request #249 from jonboulle/license_header
*: add license header to all source files
2014-10-17 15:42:20 -07:00
Jonathan Boulle
6c1ba590aa *: add license header to all source files 2014-10-17 15:36:22 -07:00
Alex Crawford
45da664c59 Merge pull request #246 from crawford/master
Add support for Azure
2014-10-12 21:37:34 -07:00
Alex Crawford
2a71551ef2 azure: add support for azure (via azure agent) 2014-10-11 09:19:47 -07:00
Alex Crawford
84e1cb3242 datasource/waagent: add support for WAAgent metadata 2014-10-11 09:19:47 -07:00
Jonathan Boulle
5214ead926 Merge pull request #245 from jonboulle/units
init: simplify CloudConfigUnit interface
2014-10-06 15:26:36 -07:00
Jonathan Boulle
e2c24c4cef init: simplify CloudConfigUnit interface 2014-10-06 15:14:29 -07:00
221 changed files with 7329 additions and 3977 deletions


@@ -1,10 +1,17 @@
 language: go
-go:
-  - 1.3
-  - 1.2
+sudo: false
+matrix:
+  include:
+    - go: 1.4
+      env: TOOLS_CMD=golang.org/x/tools/cmd
+    - go: 1.3
+      env: TOOLS_CMD=code.google.com/p/go.tools/cmd
+    - go: 1.2
+      env: TOOLS_CMD=code.google.com/p/go.tools/cmd
 install:
-  - go get code.google.com/p/go.tools/cmd/cover
+  - go get ${TOOLS_CMD}/cover
+  - go get ${TOOLS_CMD}/vet
 script:
   - ./test


@@ -1,6 +1,8 @@
 # Using Cloud-Config
-CoreOS allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units. This document describes the full list of items we can configure. The `coreos-cloudinit` program uses these files as it configures the OS after startup or during runtime. Your cloud-config is processed during each boot.
+CoreOS allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units. This document describes the full list of items we can configure. The `coreos-cloudinit` program uses these files as it configures the OS after startup or during runtime.
+Your cloud-config is processed during each boot. Invalid cloud-config won't be processed but will be logged in the journal. You can validate your cloud-config with the [CoreOS validator]({{site.url}}/validate) or by running `coreos-cloudinit -validate`.
 ## Configuration File
@@ -16,7 +18,7 @@ We've designed our implementation to allow the same cloud-config file to work ac
 The cloud-config file uses the [YAML][yaml] file format, which uses whitespace and new-lines to delimit lists, associative arrays, and values.
-A cloud-config file should contain `#cloud-config`, followed by an associative array which has zero or more of the following keys:
+A cloud-config file must contain `#cloud-config`, followed by an associative array which has zero or more of the following keys:
 - `coreos`
 - `ssh_authorized_keys`
@@ -46,13 +48,13 @@ If the platform environment supports the templating feature of coreos-cloudinit
 #cloud-config
 coreos:
   etcd:
     name: node001
     # generate a new token for each unique cluster from https://discovery.etcd.io/new
     discovery: https://discovery.etcd.io/<token>
     # multi-region and multi-cloud deployments need to use $public_ipv4
     addr: $public_ipv4:4001
     peer-addr: $private_ipv4:7001
 ```
 ...will generate a systemd unit drop-in like this:
@@ -66,7 +68,6 @@ Environment="ETCD_PEER_ADDR=192.0.2.13:7001"
 ```
 For more information about the available configuration parameters, see the [etcd documentation][etcd-config].
-Note that hyphens in the coreos.etcd.* keys are mapped to underscores.
 _Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are only supported on Amazon EC2, Google Compute Engine, OpenStack, Rackspace, DigitalOcean, and Vagrant._
@@ -80,9 +81,9 @@ The `coreos.fleet.*` parameters work very similarly to `coreos.etcd.*`, and allo
 #cloud-config
 coreos:
   fleet:
     public-ip: $public_ipv4
     metadata: region=us-west
 ```
 ...will generate a systemd unit drop-in like this:
@@ -97,6 +98,64 @@ For more information on fleet configuration, see the [fleet documentation][fleet
 [fleet-config]: https://github.com/coreos/fleet/blob/master/Documentation/deployment-and-configuration.md#configuration
+#### flannel
+The `coreos.flannel.*` parameters also work very similarly to `coreos.etcd.*`
+and `coreos.fleet.*`. They can be used to set environment variables for
+flanneld. For example, the following cloud-config...
+```yaml
+#cloud-config
+coreos:
+  flannel:
+    etcd_prefix: /coreos.com/network2
+```
+...will generate a systemd unit drop-in like so:
+```
+[Service]
+Environment="FLANNELD_ETCD_PREFIX=/coreos.com/network2"
+```
+List of flannel configuration parameters:
+- **etcd_endpoints**: Comma separated list of etcd endpoints
+- **etcd_cafile**: Path to CA file used for TLS communication with etcd
+- **etcd_certfile**: Path to certificate file used for TLS communication with etcd
+- **etcd_keyfile**: Path to private key file used for TLS communication with etcd
+- **etcd_prefix**: Etcd prefix path to be used for flannel keys
+- **ip_masq**: Install IP masquerade rules for traffic outside of flannel subnet
+- **subnet_file**: Path to flannel subnet file to write out
+- **interface**: Interface (name or IP) that should be used for inter-host communication
+[flannel-readme]: https://github.com/coreos/flannel/blob/master/README.md
+#### locksmith
+The `coreos.locksmith.*` parameters can be used to set environment variables
+for locksmith. For example, the following cloud-config...
+```yaml
+#cloud-config
+coreos:
+  locksmith:
+    endpoint: example.com:4001
+```
+...will generate a systemd unit drop-in like so:
+```
+[Service]
+Environment="LOCKSMITHD_ENDPOINT=example.com:4001"
+```
+For the complete list of locksmith configuration parameters, see the [locksmith documentation][locksmith-readme].
+[locksmith-readme]: https://github.com/coreos/locksmith/blob/master/README.md
 #### update
 The `coreos.update.*` parameters manipulate settings related to how CoreOS instances are updated.
@@ -109,9 +168,12 @@ The `reboot-strategy` parameter also affects the behaviour of [locksmith](https:
 - _etcd-lock_: Reboot after first taking a distributed lock in etcd, this guarantees that only one host will reboot concurrently and that the cluster will remain available during the update.
 - _best-effort_ - If etcd is running, "etcd-lock", otherwise simply "reboot".
 - _off_ - Disable rebooting after updates are applied (not recommended).
-- **server**: is the omaha endpoint URL which will be queried for updates.
+- **server**: The location of the [CoreUpdate][coreupdate] server which will be queried for updates. Also known as the [omaha][omaha-docs] server endpoint.
 - **group**: signifies the channel which should be used for automatic updates. This value defaults to the version of the image initially downloaded. (one of "master", "alpha", "beta", "stable")
+[coreupdate]: https://coreos.com/products/coreupdate
+[omaha-docs]: https://coreos.com/docs/coreupdate/custom-apps/coreupdate-protocol/
 *Note: cloudinit will only manipulate the locksmith unit file in the systemd runtime directory (`/run/systemd/system/locksmithd.service`). If any manual modifications are made to an overriding unit configuration file (e.g. `/etc/systemd/system/locksmithd.service`), cloudinit will no longer be able to control the locksmith service unit.*
 ##### Example
@@ -135,6 +197,10 @@ Each item is an object with the following fields:
 - **content**: Plaintext string representing entire unit file. If no value is provided, the unit is assumed to exist already.
 - **command**: Command to execute on unit: start, stop, reload, restart, try-restart, reload-or-restart, reload-or-try-restart. The default behavior is to not execute any commands.
 - **mask**: Whether to mask the unit file by symlinking it to `/dev/null` (analogous to `systemctl mask <name>`). Note that unlike `systemctl mask`, **this will destructively remove any existing unit file** located at `/etc/systemd/system/<unit>`, to ensure that the mask succeeds. The default value is false.
+- **drop-ins**: A list of unit drop-ins with the following fields:
+  - **name**: String representing unit's name. Required.
+  - **content**: Plaintext string representing entire file. Required.
 **NOTE:** The command field is ignored for all network, netdev, and link units. The systemd-networkd.service unit will be restarted in their place.
@@ -146,19 +212,34 @@ Write a unit to disk, automatically starting it.
 #cloud-config
 coreos:
   units:
     - name: docker-redis.service
       command: start
       content: |
         [Unit]
         Description=Redis container
         Author=Me
         After=docker.service
         [Service]
         Restart=always
         ExecStart=/usr/bin/docker start -a redis_server
         ExecStop=/usr/bin/docker stop -t 2 redis_server
 ```
+Add the DOCKER_OPTS environment variable to docker.service.
+```yaml
+#cloud-config
+coreos:
+  units:
+    - name: docker.service
+      drop-ins:
+        - name: 50-insecure-registry.conf
+          content: |
+            [Service]
+            Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
+```
 Start the built-in `etcd` and `fleet` services:
@@ -167,11 +248,11 @@ Start the built-in `etcd` and `fleet` services:
 #cloud-config
 coreos:
   units:
     - name: etcd.service
       command: start
     - name: fleet.service
      command: start
 ```
 ### ssh_authorized_keys
@@ -213,7 +294,8 @@ All but the `passwd` and `ssh-authorized-keys` fields will be ignored if the use
 - **groups**: Add user to these additional groups
 - **no-user-group**: Boolean. Skip default group creation.
 - **ssh-authorized-keys**: List of public SSH keys to authorize for this user
-- **coreos-ssh-import-github**: Authorize SSH keys from Github user
+- **coreos-ssh-import-github**: Authorize SSH keys from GitHub user
+- **coreos-ssh-import-github-users**: Authorize SSH keys from a list of GitHub users
 - **coreos-ssh-import-url**: Authorize SSH keys imported from a url endpoint.
 - **system**: Create the user as a system user. No home directory will be created.
 - **no-log-init**: Boolean. Skip initialization of lastlog and faillog databases.
@@ -303,11 +385,13 @@ Each item in the list may have the following keys:
 - **path**: Absolute location on disk where contents should be written
 - **content**: Data to write at the provided `path`
-- **permissions**: String representing file permissions in octal notation (i.e. '0644')
+- **permissions**: Integer representing file permissions, typically in octal notation (i.e. 0644)
 - **owner**: User and group that should own the file written to disk. This is equivalent to the `<user>:<group>` argument to `chown <user>:<group> <path>`.
+- **encoding**: Optional. The encoding of the data in content. If not specified this defaults to the yaml document encoding (usually utf-8). Supported encoding types are:
+  - **b64, base64**: Base64 encoded content
+  - **gz, gzip**: gzip encoded content, for use with the !!binary tag
+  - **gz+b64, gz+base64, gzip+b64, gzip+base64**: Base64 encoded gzip content
-Explicitly not implemented is the **encoding** attribute.
-The **content** field must represent exactly what should be written to disk.
 ```yaml
 #cloud-config
@@ -322,6 +406,24 @@ write_files:
     owner: root
     content: |
       Good news, everyone!
+  - path: /tmp/like_this
+    permissions: 0644
+    owner: root
+    encoding: gzip
+    content: !!binary |
+      H4sIAKgdh1QAAwtITM5WyK1USMqvUCjPLMlQSMssS1VIya9KzVPIySwszS9SyCpNLwYARQFQ5CcAAAA=
+  - path: /tmp/or_like_this
+    permissions: 0644
+    owner: root
+    encoding: gzip+base64
+    content: |
+      H4sIAKgdh1QAAwtITM5WyK1USMqvUCjPLMlQSMssS1VIya9KzVPIySwszS9SyCpNLwYARQFQ5CcAAAA=
+  - path: /tmp/todolist
+    permissions: 0644
+    owner: root
+    encoding: base64
+    content: |
+      UGFjayBteSBib3ggd2l0aCBmaXZlIGRvemVuIGxpcXVvciBqdWdz
 ```
 ### manage_etc_hosts

Godeps/Godeps.json generated Normal file

@@ -0,0 +1,34 @@
{
"ImportPath": "github.com/coreos/coreos-cloudinit",
"GoVersion": "go1.3.3",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "github.com/cloudsigma/cepgo",
"Rev": "1bfc4895bf5c4d3b599f3f6ee142299488c8739b"
},
{
"ImportPath": "github.com/coreos/go-systemd/dbus",
"Rev": "4fbc5060a317b142e6c7bfbedb65596d5f0ab99b"
},
{
"ImportPath": "github.com/coreos/yaml",
"Rev": "6b16a5714269b2f70720a45406b1babd947a17ef"
},
{
"ImportPath": "github.com/dotcloud/docker/pkg/netlink",
"Comment": "v0.11.1-359-g55d41c3e21e1",
"Rev": "55d41c3e21e1593b944c06196ffb2ac57ab7f653"
},
{
"ImportPath": "github.com/guelfey/go.dbus",
"Rev": "f6a3a2366cc39b8479cadc499d3c735fb10fbdda"
},
{
"ImportPath": "github.com/tarm/goserial",
"Rev": "cdabc8d44e8e84f58f18074ae44337e1f2f375b9"
}
]
}

Godeps/Readme generated Normal file

@@ -0,0 +1,5 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

Godeps/_workspace/.gitignore generated vendored Normal file

@@ -0,0 +1,2 @@
/pkg
/bin


@@ -0,0 +1,23 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test


@@ -48,7 +48,7 @@ import (
 	"fmt"
 	"runtime"
-	"github.com/coreos/coreos-cloudinit/third_party/github.com/tarm/goserial"
+	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/tarm/goserial"
 )
 const (


@@ -23,7 +23,7 @@ import (
 	"strings"
 	"sync"
-	"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
+	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
 )
 const signalBuffer = 100


@@ -18,7 +18,7 @@ package dbus
 import (
 	"errors"
-	"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
+	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
 )
 func (c *Conn) initJobs() {
@@ -208,7 +208,7 @@ func (c *Conn) SetUnitProperties(name string, runtime bool, properties ...Proper
 }
 func (c *Conn) GetUnitTypeProperty(unit string, unitType string, propertyName string) (*Property, error) {
-	return c.getProperty(unit, "org.freedesktop.systemd1." + unitType, propertyName)
+	return c.getProperty(unit, "org.freedesktop.systemd1."+unitType, propertyName)
 }
 // ListUnits returns an array with all currently loaded units. Note that


@@ -18,7 +18,7 @@ package dbus
 import (
 	"fmt"
-	"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
+	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
 	"math/rand"
 	"os"
 	"path/filepath"


@@ -17,7 +17,7 @@ limitations under the License.
 package dbus
 import (
-	"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
+	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
 )
 // From the systemd docs:


@@ -20,7 +20,7 @@ import (
 	"errors"
 	"time"
-	"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus"
+	"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
 )
 const (
@@ -101,7 +101,7 @@ func (c *Conn) SubscribeUnits(interval time.Duration) (<-chan map[string]*UnitSt
 // SubscribeUnitsCustom is like SubscribeUnits but lets you specify the buffer
 // size of the channels, the comparison function for detecting changes and a filter
 // function for cutting down on the noise that your channel receives.
-func (c *Conn) SubscribeUnitsCustom(interval time.Duration, buffer int, isChanged func(*UnitStatus, *UnitStatus) bool, filterUnit func (string) bool) (<-chan map[string]*UnitStatus, <-chan error) {
+func (c *Conn) SubscribeUnitsCustom(interval time.Duration, buffer int, isChanged func(*UnitStatus, *UnitStatus) bool, filterUnit func(string) bool) (<-chan map[string]*UnitStatus, <-chan error) {
 	old := make(map[string]*UnitStatus)
 	statusChan := make(chan map[string]*UnitStatus, buffer)
 	errChan := make(chan error, buffer)


@@ -1,3 +1,6 @@
+Copyright (c) 2011-2014 - Canonical Inc.
 This software is licensed under the LGPLv3, included below.
 As a special exception to the GNU Lesser General Public License version 3


@@ -1,3 +1,6 @@
+Note: This is a fork of https://github.com/go-yaml/yaml. The following README
+doesn't necessarily apply to this fork.
 # YAML support for the Go language
 Introduction
@@ -12,10 +15,10 @@ C library to parse and generate YAML data quickly and reliably.
 Compatibility
 -------------
-The yaml package is almost compatible with YAML 1.1, including support for
-anchors, tags, etc. There are still a few missing bits, such as document
-merging, base-60 floats (huh?), and multi-document unmarshalling. These
-features are not hard to add, and will be introduced as necessary.
+The yaml package supports most of YAML 1.1 and 1.2, including support for
+anchors, tags, map merging, etc. Multi-document unmarshalling is not yet
+implemented, and base-60 floats from YAML 1.1 are purposefully not
+supported since they're a poor design and are gone in YAML 1.2.
 Installation and usage
 ----------------------


@@ -1,6 +1,8 @@
package yaml package yaml
import ( import (
"encoding/base64"
"fmt"
"reflect" "reflect"
"strconv" "strconv"
"time" "time"
@@ -28,13 +30,15 @@ type node struct {
// Parser, produces a node tree out of a libyaml event stream. // Parser, produces a node tree out of a libyaml event stream.
type parser struct { type parser struct {
parser yaml_parser_t parser yaml_parser_t
event yaml_event_t event yaml_event_t
doc *node doc *node
transform transformString
} }
func newParser(b []byte) *parser { func newParser(b []byte, t transformString) *parser {
p := parser{} p := parser{transform: t}
if !yaml_parser_initialize(&p.parser) { if !yaml_parser_initialize(&p.parser) {
panic("Failed to initialize YAML emitter") panic("Failed to initialize YAML emitter")
} }
@@ -63,7 +67,7 @@ func (p *parser) destroy() {
func (p *parser) skip() { func (p *parser) skip() {
if p.event.typ != yaml_NO_EVENT { if p.event.typ != yaml_NO_EVENT {
if p.event.typ == yaml_STREAM_END_EVENT { if p.event.typ == yaml_STREAM_END_EVENT {
panic("Attempted to go past the end of stream. Corrupted value?") fail("Attempted to go past the end of stream. Corrupted value?")
} }
yaml_event_delete(&p.event) yaml_event_delete(&p.event)
} }
@@ -89,7 +93,7 @@ func (p *parser) fail() {
} else { } else {
msg = "Unknown problem parsing YAML content" msg = "Unknown problem parsing YAML content"
} }
panic(where + msg) fail(where + msg)
} }
func (p *parser) anchor(n *node, anchor []byte) { func (p *parser) anchor(n *node, anchor []byte) {
@@ -114,10 +118,9 @@ func (p *parser) parse() *node {
// Happens when attempting to decode an empty buffer. // Happens when attempting to decode an empty buffer.
return nil return nil
default: default:
panic("Attempted to parse unknown event: " + panic("Attempted to parse unknown event: " + strconv.Itoa(int(p.event.typ)))
strconv.Itoa(int(p.event.typ)))
} }
panic("Unreachable") panic("unreachable")
} }
func (p *parser) node(kind int) *node { func (p *parser) node(kind int) *node {
@@ -135,8 +138,7 @@ func (p *parser) document() *node {
p.skip() p.skip()
n.children = append(n.children, p.parse()) n.children = append(n.children, p.parse())
if p.event.typ != yaml_DOCUMENT_END_EVENT { if p.event.typ != yaml_DOCUMENT_END_EVENT {
panic("Expected end of document event but got " + panic("Expected end of document event but got " + strconv.Itoa(int(p.event.typ)))
strconv.Itoa(int(p.event.typ)))
} }
p.skip() p.skip()
return n return n
@@ -175,7 +177,10 @@ func (p *parser) mapping() *node {
p.anchor(n, p.event.anchor) p.anchor(n, p.event.anchor)
p.skip() p.skip()
for p.event.typ != yaml_MAPPING_END_EVENT { for p.event.typ != yaml_MAPPING_END_EVENT {
n.children = append(n.children, p.parse(), p.parse()) key := p.parse()
key.value = p.transform(key.value)
value := p.parse()
n.children = append(n.children, key, value)
} }
p.skip() p.skip()
return n return n
@@ -218,7 +223,7 @@ func (d *decoder) setter(tag string, out *reflect.Value, good *bool) (set func()
var arg interface{} var arg interface{}
*out = reflect.ValueOf(&arg).Elem() *out = reflect.ValueOf(&arg).Elem()
return func() { return func() {
*good = setter.SetYAML(tag, arg) *good = setter.SetYAML(shortTag(tag), arg)
} }
} }
} }
@@ -226,7 +231,7 @@ func (d *decoder) setter(tag string, out *reflect.Value, good *bool) (set func()
for again { for again {
again = false again = false
setter, _ := (*out).Interface().(Setter) setter, _ := (*out).Interface().(Setter)
if tag != "!!null" || setter != nil { if tag != yaml_NULL_TAG || setter != nil {
if pv := (*out); pv.Kind() == reflect.Ptr { if pv := (*out); pv.Kind() == reflect.Ptr {
if pv.IsNil() { if pv.IsNil() {
*out = reflect.New(pv.Type().Elem()).Elem() *out = reflect.New(pv.Type().Elem()).Elem()
@@ -242,7 +247,7 @@ func (d *decoder) setter(tag string, out *reflect.Value, good *bool) (set func()
var arg interface{} var arg interface{}
*out = reflect.ValueOf(&arg).Elem() *out = reflect.ValueOf(&arg).Elem()
return func() { return func() {
*good = setter.SetYAML(tag, arg) *good = setter.SetYAML(shortTag(tag), arg)
} }
} }
} }
@@ -279,10 +284,10 @@ func (d *decoder) document(n *node, out reflect.Value) (good bool) {
func (d *decoder) alias(n *node, out reflect.Value) (good bool) { func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
an, ok := d.doc.anchors[n.value] an, ok := d.doc.anchors[n.value]
if !ok { if !ok {
panic("Unknown anchor '" + n.value + "' referenced") fail("Unknown anchor '" + n.value + "' referenced")
} }
if d.aliases[n.value] { if d.aliases[n.value] {
panic("Anchor '" + n.value + "' value contains itself") fail("Anchor '" + n.value + "' value contains itself")
} }
d.aliases[n.value] = true d.aliases[n.value] = true
good = d.unmarshal(an, out) good = d.unmarshal(an, out)
@@ -290,23 +295,50 @@ func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
return good return good
} }
var zeroValue reflect.Value
func resetMap(out reflect.Value) {
for _, k := range out.MapKeys() {
out.SetMapIndex(k, zeroValue)
}
}
var durationType = reflect.TypeOf(time.Duration(0)) var durationType = reflect.TypeOf(time.Duration(0))
func (d *decoder) scalar(n *node, out reflect.Value) (good bool) { func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
var tag string var tag string
var resolved interface{} var resolved interface{}
if n.tag == "" && !n.implicit { if n.tag == "" && !n.implicit {
tag = "!!str" tag = yaml_STR_TAG
resolved = n.value resolved = n.value
} else { } else {
tag, resolved = resolve(n.tag, n.value) tag, resolved = resolve(n.tag, n.value)
if tag == yaml_BINARY_TAG {
data, err := base64.StdEncoding.DecodeString(resolved.(string))
if err != nil {
fail("!!binary value contains invalid base64 data")
}
resolved = string(data)
}
} }
if set := d.setter(tag, &out, &good); set != nil { if set := d.setter(tag, &out, &good); set != nil {
defer set() defer set()
} }
if resolved == nil {
if out.Kind() == reflect.Map && !out.CanAddr() {
resetMap(out)
} else {
out.Set(reflect.Zero(out.Type()))
}
good = true
return
}
switch out.Kind() { switch out.Kind() {
case reflect.String: case reflect.String:
if resolved != nil { if tag == yaml_BINARY_TAG {
out.SetString(resolved.(string))
good = true
} else if resolved != nil {
out.SetString(n.value) out.SetString(n.value)
good = true good = true
} }
@@ -380,17 +412,11 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
good = true good = true
} }
case reflect.Ptr: case reflect.Ptr:
switch resolved.(type) { if out.Type().Elem() == reflect.TypeOf(resolved) {
case nil: elem := reflect.New(out.Type().Elem())
out.Set(reflect.Zero(out.Type())) elem.Elem().Set(reflect.ValueOf(resolved))
out.Set(elem)
good = true good = true
default:
if out.Type().Elem() == reflect.TypeOf(resolved) {
elem := reflect.New(out.Type().Elem())
elem.Elem().Set(reflect.ValueOf(resolved))
out.Set(elem)
good = true
}
} }
} }
return good return good
@@ -404,7 +430,7 @@ func settableValueOf(i interface{}) reflect.Value {
} }
func (d *decoder) sequence(n *node, out reflect.Value) (good bool) { func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
if set := d.setter("!!seq", &out, &good); set != nil { if set := d.setter(yaml_SEQ_TAG, &out, &good); set != nil {
defer set() defer set()
} }
var iface reflect.Value var iface reflect.Value
@@ -433,7 +459,7 @@ func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
} }
func (d *decoder) mapping(n *node, out reflect.Value) (good bool) { func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
if set := d.setter("!!map", &out, &good); set != nil { if set := d.setter(yaml_MAP_TAG, &out, &good); set != nil {
defer set() defer set()
} }
if out.Kind() == reflect.Struct { if out.Kind() == reflect.Struct {
@@ -465,6 +491,13 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
} }
k := reflect.New(kt).Elem() k := reflect.New(kt).Elem()
if d.unmarshal(n.children[i], k) { if d.unmarshal(n.children[i], k) {
kkind := k.Kind()
if kkind == reflect.Interface {
kkind = k.Elem().Kind()
}
if kkind == reflect.Map || kkind == reflect.Slice {
fail(fmt.Sprintf("invalid map key: %#v", k.Interface()))
}
e := reflect.New(et).Elem() e := reflect.New(et).Elem()
if d.unmarshal(n.children[i+1], e) { if d.unmarshal(n.children[i+1], e) {
out.SetMapIndex(k, e) out.SetMapIndex(k, e)
@@ -511,28 +544,28 @@ func (d *decoder) merge(n *node, out reflect.Value) {
case aliasNode: case aliasNode:
an, ok := d.doc.anchors[n.value] an, ok := d.doc.anchors[n.value]
if ok && an.kind != mappingNode { if ok && an.kind != mappingNode {
panic(wantMap) fail(wantMap)
} }
d.unmarshal(n, out) d.unmarshal(n, out)
case sequenceNode: case sequenceNode:
// Step backwards as earlier nodes take precedence. // Step backwards as earlier nodes take precedence.
for i := len(n.children)-1; i >= 0; i-- { for i := len(n.children) - 1; i >= 0; i-- {
ni := n.children[i] ni := n.children[i]
if ni.kind == aliasNode { if ni.kind == aliasNode {
an, ok := d.doc.anchors[ni.value] an, ok := d.doc.anchors[ni.value]
if ok && an.kind != mappingNode { if ok && an.kind != mappingNode {
panic(wantMap) fail(wantMap)
} }
} else if ni.kind != mappingNode { } else if ni.kind != mappingNode {
panic(wantMap) fail(wantMap)
} }
d.unmarshal(ni, out) d.unmarshal(ni, out)
} }
default: default:
panic(wantMap) fail(wantMap)
} }
} }
func isMerge(n *node) bool { func isMerge(n *node) bool {
return n.kind == scalarNode && n.value == "<<" && (n.implicit == true || n.tag == "!!merge" || n.tag == "tag:yaml.org,2002:merge") return n.kind == scalarNode && n.value == "<<" && (n.implicit == true || n.tag == yaml_MERGE_TAG)
} }


@@ -1,10 +1,11 @@
package yaml_test package yaml_test
import ( import (
"github.com/coreos/yaml"
. "gopkg.in/check.v1" . "gopkg.in/check.v1"
"gopkg.in/yaml.v1"
"math" "math"
"reflect" "reflect"
"strings"
"time" "time"
) )
@@ -316,7 +317,10 @@ var unmarshalTests = []struct {
map[string]*string{"foo": new(string)}, map[string]*string{"foo": new(string)},
}, { }, {
"foo: null", "foo: null",
map[string]string{}, map[string]string{"foo": ""},
}, {
"foo: null",
map[string]interface{}{"foo": nil},
}, },
// Ignored field // Ignored field
@@ -372,11 +376,29 @@ var unmarshalTests = []struct {
map[string]time.Duration{"a": 3 * time.Second}, map[string]time.Duration{"a": 3 * time.Second},
}, },
// Issue #24. // Issue #24.
{ {
"a: <foo>", "a: <foo>",
map[string]string{"a": "<foo>"}, map[string]string{"a": "<foo>"},
}, },
// Base 60 floats are obsolete and unsupported.
{
"a: 1:1\n",
map[string]string{"a": "1:1"},
},
// Binary data.
{
"a: !!binary gIGC\n",
map[string]string{"a": "\x80\x81\x82"},
}, {
"a: !!binary |\n " + strings.Repeat("kJCQ", 17) + "kJ\n CQ\n",
map[string]string{"a": strings.Repeat("\x90", 54)},
}, {
"a: !!binary |\n " + strings.Repeat("A", 70) + "\n ==\n",
map[string]string{"a": strings.Repeat("\x00", 52)},
},
} }
type inlineB struct { type inlineB struct {
@@ -424,12 +446,15 @@ func (s *S) TestUnmarshalNaN(c *C) {
var unmarshalErrorTests = []struct { var unmarshalErrorTests = []struct {
data, error string data, error string
}{ }{
{"v: !!float 'error'", "YAML error: Can't decode !!str 'error' as a !!float"}, {"v: !!float 'error'", "YAML error: cannot decode !!str `error` as a !!float"},
{"v: [A,", "YAML error: line 1: did not find expected node content"}, {"v: [A,", "YAML error: line 1: did not find expected node content"},
{"v:\n- [A,", "YAML error: line 2: did not find expected node content"}, {"v:\n- [A,", "YAML error: line 2: did not find expected node content"},
{"a: *b\n", "YAML error: Unknown anchor 'b' referenced"}, {"a: *b\n", "YAML error: Unknown anchor 'b' referenced"},
{"a: &a\n b: *a\n", "YAML error: Anchor 'a' value contains itself"}, {"a: &a\n b: *a\n", "YAML error: Anchor 'a' value contains itself"},
{"value: -", "YAML error: block sequence entries are not allowed in this context"}, {"value: -", "YAML error: block sequence entries are not allowed in this context"},
{"a: !!binary ==", "YAML error: !!binary value contains invalid base64 data"},
{"{[.]}", `YAML error: invalid map key: \[\]interface \{\}\{"\."\}`},
{"{{.}}", `YAML error: invalid map key: map\[interface\ \{\}\]interface \{\}\{".":interface \{\}\(nil\)\}`},
} }
func (s *S) TestUnmarshalErrors(c *C) { func (s *S) TestUnmarshalErrors(c *C) {
@@ -532,6 +557,23 @@ func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
c.Assert(m["ghi"].value, Equals, 3) c.Assert(m["ghi"].value, Equals, 3)
} }
func (s *S) TestUnmarshalWithTransform(c *C) {
data := `{a_b: 1, c-d: 2, e-f_g: 3, h_i-j: 4}`
expect := map[string]int{
"a_b": 1,
"c_d": 2,
"e_f_g": 3,
"h_i_j": 4,
}
m := map[string]int{}
yaml.UnmarshalMappingKeyTransform = func(i string) string {
return strings.Replace(i, "-", "_", -1)
}
err := yaml.Unmarshal([]byte(data), m)
c.Assert(err, IsNil)
c.Assert(m, DeepEquals, expect)
}
// From http://yaml.org/type/merge.html // From http://yaml.org/type/merge.html
var mergeTests = ` var mergeTests = `
anchors: anchors:
@@ -624,6 +666,30 @@ func (s *S) TestMergeStruct(c *C) {
} }
} }
var unmarshalNullTests = []func() interface{}{
func() interface{} { var v interface{}; v = "v"; return &v },
func() interface{} { var s = "s"; return &s },
func() interface{} { var s = "s"; sptr := &s; return &sptr },
func() interface{} { var i = 1; return &i },
func() interface{} { var i = 1; iptr := &i; return &iptr },
func() interface{} { m := map[string]int{"s": 1}; return &m },
func() interface{} { m := map[string]int{"s": 1}; return m },
}
func (s *S) TestUnmarshalNull(c *C) {
for _, test := range unmarshalNullTests {
item := test()
zero := reflect.Zero(reflect.TypeOf(item).Elem()).Interface()
err := yaml.Unmarshal([]byte("null"), item)
c.Assert(err, IsNil)
if reflect.TypeOf(item).Kind() == reflect.Map {
c.Assert(reflect.ValueOf(item).Interface(), DeepEquals, reflect.MakeMap(reflect.TypeOf(item)).Interface())
} else {
c.Assert(reflect.ValueOf(item).Elem().Interface(), DeepEquals, zero)
}
}
}
//var data []byte //var data []byte
//func init() { //func init() {
// var err error // var err error

View File

@@ -973,8 +973,8 @@ func yaml_emitter_analyze_tag(emitter *yaml_emitter_t, tag []byte) bool {
if bytes.HasPrefix(tag, tag_directive.prefix) { if bytes.HasPrefix(tag, tag_directive.prefix) {
emitter.tag_data.handle = tag_directive.handle emitter.tag_data.handle = tag_directive.handle
emitter.tag_data.suffix = tag[len(tag_directive.prefix):] emitter.tag_data.suffix = tag[len(tag_directive.prefix):]
return true
} }
return true
} }
emitter.tag_data.suffix = tag emitter.tag_data.suffix = tag
return true return true
@@ -1279,6 +1279,9 @@ func yaml_emitter_write_tag_content(emitter *yaml_emitter_t, value []byte, need_
for k := 0; k < w; k++ { for k := 0; k < w; k++ {
octet := value[i] octet := value[i]
i++ i++
if !put(emitter, '%') {
return false
}
c := octet >> 4 c := octet >> 4
if c < 10 { if c < 10 {

View File

@@ -2,8 +2,10 @@ package yaml
import ( import (
"reflect" "reflect"
"regexp"
"sort" "sort"
"strconv" "strconv"
"strings"
"time" "time"
) )
@@ -50,14 +52,19 @@ func (e *encoder) must(ok bool) {
if msg == "" { if msg == "" {
msg = "Unknown problem generating YAML content" msg = "Unknown problem generating YAML content"
} }
panic(msg) fail(msg)
} }
} }
func (e *encoder) marshal(tag string, in reflect.Value) { func (e *encoder) marshal(tag string, in reflect.Value) {
if !in.IsValid() {
e.nilv()
return
}
var value interface{} var value interface{}
if getter, ok := in.Interface().(Getter); ok { if getter, ok := in.Interface().(Getter); ok {
tag, value = getter.GetYAML() tag, value = getter.GetYAML()
tag = longTag(tag)
if value == nil { if value == nil {
e.nilv() e.nilv()
return return
@@ -98,7 +105,7 @@ func (e *encoder) marshal(tag string, in reflect.Value) {
case reflect.Bool: case reflect.Bool:
e.boolv(tag, in) e.boolv(tag, in)
default: default:
panic("Can't marshal type yet: " + in.Type().String()) panic("Can't marshal type: " + in.Type().String())
} }
} }
@@ -167,11 +174,46 @@ func (e *encoder) slicev(tag string, in reflect.Value) {
e.emit() e.emit()
} }
// isBase60Float returns whether s is in base 60 notation as defined in YAML 1.1.
//
// The base 60 float notation in YAML 1.1 is a terrible idea and is unsupported
// in YAML 1.2 and by this package, but these should be marshalled quoted for
// the time being for compatibility with other parsers.
func isBase60Float(s string) (result bool) {
// Fast path.
if s == "" {
return false
}
c := s[0]
if !(c == '+' || c == '-' || c >= '0' && c <= '9') || strings.IndexByte(s, ':') < 0 {
return false
}
// Do the full match.
return base60float.MatchString(s)
}
// From http://yaml.org/type/float.html, except the regular expression there
// is bogus. In practice parsers do not enforce the "\.[0-9_]*" suffix.
var base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0-9_]*)?$`)
func (e *encoder) stringv(tag string, in reflect.Value) { func (e *encoder) stringv(tag string, in reflect.Value) {
var style yaml_scalar_style_t var style yaml_scalar_style_t
s := in.String() s := in.String()
if rtag, _ := resolve("", s); rtag != "!!str" { rtag, rs := resolve("", s)
if rtag == yaml_BINARY_TAG {
if tag == "" || tag == yaml_STR_TAG {
tag = rtag
s = rs.(string)
} else if tag == yaml_BINARY_TAG {
fail("explicitly tagged !!binary data must be base64-encoded")
} else {
fail("cannot marshal invalid UTF-8 data as " + shortTag(tag))
}
}
if tag == "" && (rtag != yaml_STR_TAG || isBase60Float(s)) {
style = yaml_DOUBLE_QUOTED_SCALAR_STYLE style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
} else if strings.Contains(s, "\n") {
style = yaml_LITERAL_SCALAR_STYLE
} else { } else {
style = yaml_PLAIN_SCALAR_STYLE style = yaml_PLAIN_SCALAR_STYLE
} }
@@ -218,9 +260,6 @@ func (e *encoder) nilv() {
func (e *encoder) emitScalar(value, anchor, tag string, style yaml_scalar_style_t) { func (e *encoder) emitScalar(value, anchor, tag string, style yaml_scalar_style_t) {
implicit := tag == "" implicit := tag == ""
if !implicit {
style = yaml_PLAIN_SCALAR_STYLE
}
e.must(yaml_scalar_event_initialize(&e.event, []byte(anchor), []byte(tag), []byte(value), implicit, implicit, style)) e.must(yaml_scalar_event_initialize(&e.event, []byte(anchor), []byte(tag), []byte(value), implicit, implicit, style))
e.emit() e.emit()
} }

View File

@@ -2,12 +2,13 @@ package yaml_test
import ( import (
"fmt" "fmt"
"gopkg.in/yaml.v1"
. "gopkg.in/check.v1"
"math" "math"
"strconv" "strconv"
"strings" "strings"
"time" "time"
"github.com/coreos/yaml"
. "gopkg.in/check.v1"
) )
var marshalIntTest = 123 var marshalIntTest = 123
@@ -17,6 +18,9 @@ var marshalTests = []struct {
data string data string
}{ }{
{ {
nil,
"null\n",
}, {
&struct{}{}, &struct{}{},
"{}\n", "{}\n",
}, { }, {
@@ -87,7 +91,7 @@ var marshalTests = []struct {
"v:\n- A\n- B\n", "v:\n- A\n- B\n",
}, { }, {
map[string][]string{"v": []string{"A", "B\nC"}}, map[string][]string{"v": []string{"A", "B\nC"}},
"v:\n- A\n- 'B\n\n C'\n", "v:\n- A\n- |-\n B\n C\n",
}, { }, {
map[string][]interface{}{"v": []interface{}{"A", 1, map[string][]int{"B": []int{2, 3}}}}, map[string][]interface{}{"v": []interface{}{"A", 1, map[string][]int{"B": []int{2, 3}}}},
"v:\n- A\n- 1\n- B:\n - 2\n - 3\n", "v:\n- A\n- 1\n- B:\n - 2\n - 3\n",
@@ -220,11 +224,39 @@ var marshalTests = []struct {
"a: 3s\n", "a: 3s\n",
}, },
// Issue #24. // Issue #24: bug in map merging logic.
{ {
map[string]string{"a": "<foo>"}, map[string]string{"a": "<foo>"},
"a: <foo>\n", "a: <foo>\n",
}, },
// Issue #34: marshal unsupported base 60 floats quoted for compatibility
// with old YAML 1.1 parsers.
{
map[string]string{"a": "1:1"},
"a: \"1:1\"\n",
},
// Binary data.
{
map[string]string{"a": "\x00"},
"a: \"\\0\"\n",
}, {
map[string]string{"a": "\x80\x81\x82"},
"a: !!binary gIGC\n",
}, {
map[string]string{"a": strings.Repeat("\x90", 54)},
"a: !!binary |\n " + strings.Repeat("kJCQ", 17) + "kJ\n CQ\n",
}, {
map[string]interface{}{"a": typeWithGetter{"!!str", "\x80\x81\x82"}},
"a: !!binary gIGC\n",
},
// Escaping of tags.
{
map[string]interface{}{"a": typeWithGetter{"foo!bar", 1}},
"a: !<foo%21bar> 1\n",
},
} }
func (s *S) TestMarshal(c *C) { func (s *S) TestMarshal(c *C) {
@@ -238,20 +270,29 @@ func (s *S) TestMarshal(c *C) {
var marshalErrorTests = []struct { var marshalErrorTests = []struct {
value interface{} value interface{}
error string error string
}{ panic string
{ }{{
&struct { value: &struct {
B int B int
inlineB ",inline" inlineB ",inline"
}{1, inlineB{2, inlineC{3}}}, }{1, inlineB{2, inlineC{3}}},
`Duplicated key 'b' in struct struct \{ B int; .*`, panic: `Duplicated key 'b' in struct struct \{ B int; .*`,
}, }, {
} value: typeWithGetter{"!!binary", "\x80"},
error: "YAML error: explicitly tagged !!binary data must be base64-encoded",
}, {
value: typeWithGetter{"!!float", "\x80"},
error: `YAML error: cannot marshal invalid UTF-8 data as !!float`,
}}
func (s *S) TestMarshalErrors(c *C) { func (s *S) TestMarshalErrors(c *C) {
for _, item := range marshalErrorTests { for _, item := range marshalErrorTests {
_, err := yaml.Marshal(item.value) if item.panic != "" {
c.Assert(err, ErrorMatches, item.error) c.Assert(func() { yaml.Marshal(item.value) }, PanicMatches, item.panic)
} else {
_, err := yaml.Marshal(item.value)
c.Assert(err, ErrorMatches, item.error)
}
} }
} }

Godeps/_workspace/src/github.com/coreos/yaml/resolve.go (new vendored file, 190 lines)

@@ -0,0 +1,190 @@
package yaml
import (
"encoding/base64"
"fmt"
"math"
"strconv"
"strings"
"unicode/utf8"
)
// TODO: merge, timestamps, base 60 floats, omap.
type resolveMapItem struct {
value interface{}
tag string
}
var resolveTable = make([]byte, 256)
var resolveMap = make(map[string]resolveMapItem)
func init() {
t := resolveTable
t[int('+')] = 'S' // Sign
t[int('-')] = 'S'
for _, c := range "0123456789" {
t[int(c)] = 'D' // Digit
}
for _, c := range "yYnNtTfFoO~" {
t[int(c)] = 'M' // In map
}
t[int('.')] = '.' // Float (potentially in map)
var resolveMapList = []struct {
v interface{}
tag string
l []string
}{
{true, yaml_BOOL_TAG, []string{"y", "Y", "yes", "Yes", "YES"}},
{true, yaml_BOOL_TAG, []string{"true", "True", "TRUE"}},
{true, yaml_BOOL_TAG, []string{"on", "On", "ON"}},
{false, yaml_BOOL_TAG, []string{"n", "N", "no", "No", "NO"}},
{false, yaml_BOOL_TAG, []string{"false", "False", "FALSE"}},
{false, yaml_BOOL_TAG, []string{"off", "Off", "OFF"}},
{nil, yaml_NULL_TAG, []string{"", "~", "null", "Null", "NULL"}},
{math.NaN(), yaml_FLOAT_TAG, []string{".nan", ".NaN", ".NAN"}},
{math.Inf(+1), yaml_FLOAT_TAG, []string{".inf", ".Inf", ".INF"}},
{math.Inf(+1), yaml_FLOAT_TAG, []string{"+.inf", "+.Inf", "+.INF"}},
{math.Inf(-1), yaml_FLOAT_TAG, []string{"-.inf", "-.Inf", "-.INF"}},
{"<<", yaml_MERGE_TAG, []string{"<<"}},
}
m := resolveMap
for _, item := range resolveMapList {
for _, s := range item.l {
m[s] = resolveMapItem{item.v, item.tag}
}
}
}
const longTagPrefix = "tag:yaml.org,2002:"
func shortTag(tag string) string {
// TODO This can easily be made faster and produce less garbage.
if strings.HasPrefix(tag, longTagPrefix) {
return "!!" + tag[len(longTagPrefix):]
}
return tag
}
func longTag(tag string) string {
if strings.HasPrefix(tag, "!!") {
return longTagPrefix + tag[2:]
}
return tag
}
func resolvableTag(tag string) bool {
switch tag {
case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG:
return true
}
return false
}
func resolve(tag string, in string) (rtag string, out interface{}) {
if !resolvableTag(tag) {
return tag, in
}
defer func() {
switch tag {
case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG:
return
}
fail(fmt.Sprintf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag)))
}()
// Any data is accepted as a !!str or !!binary.
// Otherwise, the prefix is enough of a hint about what it might be.
hint := byte('N')
if in != "" {
hint = resolveTable[in[0]]
}
if hint != 0 && tag != yaml_STR_TAG && tag != yaml_BINARY_TAG {
// Handle things we can lookup in a map.
if item, ok := resolveMap[in]; ok {
return item.tag, item.value
}
// Base 60 floats are a bad idea, were dropped in YAML 1.2, and
// are purposefully unsupported here. They're still quoted on
// the way out for compatibility with other parsers, though.
switch hint {
case 'M':
// We've already checked the map above.
case '.':
// Not in the map, so maybe a normal float.
floatv, err := strconv.ParseFloat(in, 64)
if err == nil {
return yaml_FLOAT_TAG, floatv
}
case 'D', 'S':
// Int, float, or timestamp.
plain := strings.Replace(in, "_", "", -1)
intv, err := strconv.ParseInt(plain, 0, 64)
if err == nil {
if intv == int64(int(intv)) {
return yaml_INT_TAG, int(intv)
} else {
return yaml_INT_TAG, intv
}
}
floatv, err := strconv.ParseFloat(plain, 64)
if err == nil {
return yaml_FLOAT_TAG, floatv
}
if strings.HasPrefix(plain, "0b") {
intv, err := strconv.ParseInt(plain[2:], 2, 64)
if err == nil {
return yaml_INT_TAG, int(intv)
}
} else if strings.HasPrefix(plain, "-0b") {
intv, err := strconv.ParseInt(plain[3:], 2, 64)
if err == nil {
return yaml_INT_TAG, -int(intv)
}
}
// XXX Handle timestamps here.
default:
panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")")
}
}
if tag == yaml_BINARY_TAG {
return yaml_BINARY_TAG, in
}
if utf8.ValidString(in) {
return yaml_STR_TAG, in
}
return yaml_BINARY_TAG, encodeBase64(in)
}
// encodeBase64 encodes s as base64 that is broken up into multiple lines
// as appropriate for the resulting length.
func encodeBase64(s string) string {
const lineLen = 70
encLen := base64.StdEncoding.EncodedLen(len(s))
lines := encLen/lineLen + 1
buf := make([]byte, encLen*2+lines)
in := buf[0:encLen]
out := buf[encLen:]
base64.StdEncoding.Encode(in, []byte(s))
k := 0
for i := 0; i < len(in); i += lineLen {
j := i + lineLen
if j > len(in) {
j = len(in)
}
k += copy(out[k:], in[i:j])
if lines > 1 {
out[k] = '\n'
k++
}
}
return string(out[:k])
}
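
The resolver above is internal to the package, but its effect is visible through Unmarshal: plain scalars come back as bool, int, or float64 where the table and parsers above apply, base 60 forms such as 1:1 stay strings, and anything else resolves to !!str. A minimal sketch of that behavior, assuming the github.com/coreos/yaml import path used by the tests in this diff:

package main

import (
	"fmt"

	"github.com/coreos/yaml"
)

func main() {
	var out map[string]interface{}
	doc := []byte("a: on\nb: 0b1010\nc: 1:1\nd: .inf\n")
	if err := yaml.Unmarshal(doc, &out); err != nil {
		panic(err)
	}
	for _, k := range []string{"a", "b", "c", "d"} {
		fmt.Printf("%s: %T %#v\n", k, out[k], out[k])
	}
	// Expected, per the resolver rules above:
	//   a: bool true     ("on" is a YAML 1.1 boolean)
	//   b: int 10        (binary integer literal)
	//   c: string "1:1"  (base 60 floats are unsupported and stay strings)
	//   d: float64 +Inf  (".inf" is in the resolve table)
}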

View File

@@ -10,23 +10,20 @@ import (
"errors" "errors"
"fmt" "fmt"
"reflect" "reflect"
"runtime"
"strings" "strings"
"sync" "sync"
) )
type yamlError string
func fail(msg string) {
panic(yamlError(msg))
}
func handleErr(err *error) { func handleErr(err *error) {
if r := recover(); r != nil { if r := recover(); r != nil {
if _, ok := r.(runtime.Error); ok { if e, ok := r.(yamlError); ok {
panic(r) *err = errors.New("YAML error: " + string(e))
} else if _, ok := r.(*reflect.ValueError); ok {
panic(r)
} else if _, ok := r.(externalPanic); ok {
panic(r)
} else if s, ok := r.(string); ok {
*err = errors.New("YAML error: " + s)
} else if e, ok := r.(error); ok {
*err = e
} else { } else {
panic(r) panic(r)
} }
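
With fail and handleErr, any problem detected while decoding is raised as a yamlError panic inside the package and converted back into an ordinary error at the API boundary, so callers only ever see an error return. A brief sketch of the caller's view, assuming the github.com/coreos/yaml import path:

package main

import (
	"fmt"

	"github.com/coreos/yaml"
)

func main() {
	var out map[string]interface{}

	// The unknown alias is detected deep inside the decoder; fail() panics
	// with a yamlError and handleErr turns it back into this error value.
	err := yaml.Unmarshal([]byte("a: *missing\n"), &out)
	fmt.Println(err) // YAML error: Unknown anchor 'missing' referenced
}
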
@@ -78,7 +75,7 @@ type Getter interface {
// F int `yaml:"a,omitempty"` // F int `yaml:"a,omitempty"`
// B int // B int
// } // }
// var T t // var t T
// yaml.Unmarshal([]byte("a: 1\nb: 2"), &t) // yaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
// //
// See the documentation of Marshal for the format of tags and a list of // See the documentation of Marshal for the format of tags and a list of
@@ -87,11 +84,15 @@ type Getter interface {
func Unmarshal(in []byte, out interface{}) (err error) { func Unmarshal(in []byte, out interface{}) (err error) {
defer handleErr(&err) defer handleErr(&err)
d := newDecoder() d := newDecoder()
p := newParser(in) p := newParser(in, UnmarshalMappingKeyTransform)
defer p.destroy() defer p.destroy()
node := p.parse() node := p.parse()
if node != nil { if node != nil {
d.unmarshal(node, reflect.ValueOf(out)) v := reflect.ValueOf(out)
if v.Kind() == reflect.Ptr && !v.IsNil() {
v = v.Elem()
}
d.unmarshal(node, v)
} }
return nil return nil
} }
@@ -145,6 +146,17 @@ func Marshal(in interface{}) (out []byte, err error) {
return return
} }
// UnmarshalMappingKeyTransform is a string transformation that is applied to
// each mapping key in a YAML document before it is unmarshalled. By default,
// UnmarshalMappingKeyTransform is an identity transform (no modification).
var UnmarshalMappingKeyTransform transformString = identityTransform
type transformString func(in string) (out string)
func identityTransform(in string) (out string) {
return in
}
// -------------------------------------------------------------------------- // --------------------------------------------------------------------------
// Maintain a mapping of keys to structure field indexes // Maintain a mapping of keys to structure field indexes
@@ -174,12 +186,6 @@ type fieldInfo struct {
var structMap = make(map[reflect.Type]*structInfo) var structMap = make(map[reflect.Type]*structInfo)
var fieldMapMutex sync.RWMutex var fieldMapMutex sync.RWMutex
type externalPanic string
func (e externalPanic) String() string {
return string(e)
}
func getStructInfo(st reflect.Type) (*structInfo, error) { func getStructInfo(st reflect.Type) (*structInfo, error) {
fieldMapMutex.RLock() fieldMapMutex.RLock()
sinfo, found := structMap[st] sinfo, found := structMap[st]
@@ -220,8 +226,7 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
case "inline": case "inline":
inline = true inline = true
default: default:
msg := fmt.Sprintf("Unsupported flag %q in tag %q of type %s", flag, tag, st) return nil, errors.New(fmt.Sprintf("Unsupported flag %q in tag %q of type %s", flag, tag, st))
panic(externalPanic(msg))
} }
} }
tag = fields[0] tag = fields[0]
@@ -229,6 +234,7 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
if inline { if inline {
switch field.Type.Kind() { switch field.Type.Kind() {
// TODO: Implement support for inline maps.
//case reflect.Map: //case reflect.Map:
// if inlineMap >= 0 { // if inlineMap >= 0 {
// return nil, errors.New("Multiple ,inline maps in struct " + st.String()) // return nil, errors.New("Multiple ,inline maps in struct " + st.String())
@@ -256,8 +262,8 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
fieldsList = append(fieldsList, finfo) fieldsList = append(fieldsList, finfo)
} }
default: default:
//panic("Option ,inline needs a struct value or map field") //return nil, errors.New("Option ,inline needs a struct value or map field")
panic("Option ,inline needs a struct value field") return nil, errors.New("Option ,inline needs a struct value field")
} }
continue continue
} }
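
The UnmarshalMappingKeyTransform hook added above is what lets coreos-cloudinit accept hyphenated keys: it rewrites every mapping key before the key is matched against struct tags or map entries. A minimal sketch, assuming the github.com/coreos/yaml import path; the Update struct here is illustrative only:

package main

import (
	"fmt"
	"strings"

	"github.com/coreos/yaml"
)

// Update is an illustrative stand-in for the underscore-tagged config
// structs added later in this diff.
type Update struct {
	RebootStrategy string `yaml:"reboot_strategy"`
}

func main() {
	// Normalize hyphens to underscores so "reboot-strategy" matches the tag.
	yaml.UnmarshalMappingKeyTransform = func(in string) string {
		return strings.Replace(in, "-", "_", -1)
	}

	var u Update
	if err := yaml.Unmarshal([]byte("reboot-strategy: etcd-lock\n"), &u); err != nil {
		panic(err)
	}
	fmt.Println(u.RebootStrategy) // etcd-lock
}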

View File

@@ -294,6 +294,10 @@ const (
yaml_SEQ_TAG = "tag:yaml.org,2002:seq" // The tag !!seq is used to denote sequences. yaml_SEQ_TAG = "tag:yaml.org,2002:seq" // The tag !!seq is used to denote sequences.
yaml_MAP_TAG = "tag:yaml.org,2002:map" // The tag !!map is used to denote mapping. yaml_MAP_TAG = "tag:yaml.org,2002:map" // The tag !!map is used to denote mapping.
// Not in original libyaml.
yaml_BINARY_TAG = "tag:yaml.org,2002:binary"
yaml_MERGE_TAG = "tag:yaml.org,2002:merge"
yaml_DEFAULT_SCALAR_TAG = yaml_STR_TAG // The default scalar tag is !!str. yaml_DEFAULT_SCALAR_TAG = yaml_STR_TAG // The default scalar tag is !!str.
yaml_DEFAULT_SEQUENCE_TAG = yaml_SEQ_TAG // The default sequence tag is !!seq. yaml_DEFAULT_SEQUENCE_TAG = yaml_SEQ_TAG // The default sequence tag is !!seq.
yaml_DEFAULT_MAPPING_TAG = yaml_MAP_TAG // The default mapping tag is !!map. yaml_DEFAULT_MAPPING_TAG = yaml_MAP_TAG // The default mapping tag is !!map.

View File

@@ -2,7 +2,7 @@ package introspect
import ( import (
"encoding/xml" "encoding/xml"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus" "github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
"strings" "strings"
) )

View File

@@ -2,7 +2,7 @@ package introspect
import ( import (
"encoding/xml" "encoding/xml"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus" "github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
"reflect" "reflect"
) )

View File

@@ -3,8 +3,8 @@
package prop package prop
import ( import (
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus" "github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus"
"github.com/coreos/coreos-cloudinit/third_party/github.com/guelfey/go.dbus/introspect" "github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/guelfey/go.dbus/introspect"
"sync" "sync"
) )

View File

@@ -0,0 +1,61 @@
package serial
import (
"testing"
"time"
)
func TestConnection(t *testing.T) {
c0 := &Config{Name: "/dev/ttyUSB0", Baud: 115200}
c1 := &Config{Name: "/dev/ttyUSB1", Baud: 115200}
s1, err := OpenPort(c0)
if err != nil {
t.Fatal(err)
}
s2, err := OpenPort(c1)
if err != nil {
t.Fatal(err)
}
ch := make(chan int, 1)
go func() {
buf := make([]byte, 128)
var readCount int
for {
n, err := s2.Read(buf)
if err != nil {
t.Fatal(err)
}
readCount++
t.Logf("Read %v %v bytes: % 02x %s", readCount, n, buf[:n], buf[:n])
select {
case <-ch:
ch <- readCount
close(ch)
default:
}
}
}()
if _, err = s1.Write([]byte("hello")); err != nil {
t.Fatal(err)
}
if _, err = s1.Write([]byte(" ")); err != nil {
t.Fatal(err)
}
time.Sleep(time.Second)
if _, err = s1.Write([]byte("world")); err != nil {
t.Fatal(err)
}
time.Sleep(time.Second / 10)
ch <- 0
s1.Write([]byte(" ")) // We could be blocked in the read without this
c := <-ch
exp := 5
if c >= exp {
t.Fatalf("Expected less than %v read, got %v", exp, c)
}
}

config/config.go (new file, 154 lines)

@@ -0,0 +1,154 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"fmt"
"reflect"
"regexp"
"strings"
"github.com/coreos/coreos-cloudinit/Godeps/_workspace/src/github.com/coreos/yaml"
)
// CloudConfig encapsulates the entire cloud-config configuration file and maps
// directly to YAML. Fields that cannot be set in the cloud-config (fields
// reserved for internal use) have the YAML tag '-' so that they aren't marshalled.
type CloudConfig struct {
SSHAuthorizedKeys []string `yaml:"ssh_authorized_keys"`
CoreOS CoreOS `yaml:"coreos"`
WriteFiles []File `yaml:"write_files"`
Hostname string `yaml:"hostname"`
Users []User `yaml:"users"`
ManageEtcHosts EtcHosts `yaml:"manage_etc_hosts"`
}
type CoreOS struct {
Etcd Etcd `yaml:"etcd"`
Flannel Flannel `yaml:"flannel"`
Fleet Fleet `yaml:"fleet"`
Locksmith Locksmith `yaml:"locksmith"`
OEM OEM `yaml:"oem"`
Update Update `yaml:"update"`
Units []Unit `yaml:"units"`
}
func IsCloudConfig(userdata string) bool {
header := strings.SplitN(userdata, "\n", 2)[0]
// Explicitly trim the header so we can handle user-data from
// non-unix operating systems. The rest of the file is parsed
// by yaml, which correctly handles CRLF.
header = strings.TrimSuffix(header, "\r")
return (header == "#cloud-config")
}
// NewCloudConfig instantiates a new CloudConfig from the given contents (a
// string of YAML), returning any error encountered. Unknown fields are
// ignored but logged when encountered.
func NewCloudConfig(contents string) (*CloudConfig, error) {
yaml.UnmarshalMappingKeyTransform = func(nameIn string) (nameOut string) {
return strings.Replace(nameIn, "-", "_", -1)
}
var cfg CloudConfig
err := yaml.Unmarshal([]byte(contents), &cfg)
return &cfg, err
}
func (cc CloudConfig) String() string {
bytes, err := yaml.Marshal(cc)
if err != nil {
return ""
}
stringified := string(bytes)
stringified = fmt.Sprintf("#cloud-config\n%s", stringified)
return stringified
}
// IsZero returns whether or not the parameter is the zero value for its type.
// If the parameter is a struct, only the exported fields are considered.
func IsZero(c interface{}) bool {
return isZero(reflect.ValueOf(c))
}
type ErrorValid struct {
Value string
Valid string
Field string
}
func (e ErrorValid) Error() string {
return fmt.Sprintf("invalid value %q for option %q (valid options: %q)", e.Value, e.Field, e.Valid)
}
// AssertStructValid checks the fields in the structure and makes sure that
// they contain valid values as specified by the 'valid' flag. Empty fields are
// implicitly valid.
func AssertStructValid(c interface{}) error {
ct := reflect.TypeOf(c)
cv := reflect.ValueOf(c)
for i := 0; i < ct.NumField(); i++ {
ft := ct.Field(i)
if !isFieldExported(ft) {
continue
}
if err := AssertValid(cv.Field(i), ft.Tag.Get("valid")); err != nil {
err.Field = ft.Name
return err
}
}
return nil
}
// AssertValid checks to make sure that the given value is in the list of
// valid values. Zero values are implicitly valid.
func AssertValid(value reflect.Value, valid string) *ErrorValid {
if valid == "" || isZero(value) {
return nil
}
vs := fmt.Sprintf("%v", value.Interface())
if m, _ := regexp.MatchString(valid, vs); m {
return nil
}
return &ErrorValid{
Value: vs,
Valid: valid,
}
}
func isZero(v reflect.Value) bool {
switch v.Kind() {
case reflect.Struct:
vt := v.Type()
for i := 0; i < v.NumField(); i++ {
if isFieldExported(vt.Field(i)) && !isZero(v.Field(i)) {
return false
}
}
return true
default:
return v.Interface() == reflect.Zero(v.Type()).Interface()
}
}
func isFieldExported(f reflect.StructField) bool {
return f.PkgPath == ""
}
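
Taken together: NewCloudConfig parses user-data with hyphenated keys normalized, AssertStructValid enforces the valid tags declared on the config types later in this diff, and String re-serializes the config under the #cloud-config header. A short usage sketch, assuming the import path github.com/coreos/coreos-cloudinit/config:

package main

import (
	"fmt"
	"strings"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	userdata := "#cloud-config\nhostname: node-1\ncoreos:\n  update:\n    reboot-strategy: sometimes\n"

	cfg, err := config.NewCloudConfig(userdata)
	if err != nil {
		panic(err)
	}

	// Hyphenated keys are normalized, so the value lands on the
	// underscore-tagged RebootStrategy field.
	fmt.Println(cfg.Hostname)                     // node-1
	fmt.Println(cfg.CoreOS.Update.RebootStrategy) // sometimes

	// The valid tag on Update.RebootStrategy only admits
	// best-effort|etcd-lock|reboot|off; zero values would pass.
	if err := config.AssertStructValid(cfg.CoreOS.Update); err != nil {
		fmt.Println(err)
		// invalid value "sometimes" for option "RebootStrategy" (valid options: "^(best-effort|etcd-lock|reboot|off)$")
	}

	// String re-attaches the #cloud-config header on the way out.
	fmt.Println(strings.SplitN(cfg.String(), "\n", 2)[0]) // #cloud-config
}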

config/config_test.go (new file, 497 lines)

@@ -0,0 +1,497 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"reflect"
"regexp"
"strings"
"testing"
)
func TestNewCloudConfig(t *testing.T) {
tests := []struct {
contents string
config CloudConfig
}{
{},
{
contents: "#cloud-config\nwrite_files:\n - path: underscore",
config: CloudConfig{WriteFiles: []File{File{Path: "underscore"}}},
},
{
contents: "#cloud-config\nwrite-files:\n - path: hyphen",
config: CloudConfig{WriteFiles: []File{File{Path: "hyphen"}}},
},
{
contents: "#cloud-config\ncoreos:\n update:\n reboot-strategy: off",
config: CloudConfig{CoreOS: CoreOS{Update: Update{RebootStrategy: "off"}}},
},
{
contents: "#cloud-config\ncoreos:\n update:\n reboot-strategy: false",
config: CloudConfig{CoreOS: CoreOS{Update: Update{RebootStrategy: "false"}}},
},
{
contents: "#cloud-config\nwrite_files:\n - permissions: 0744",
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "0744"}}},
},
{
contents: "#cloud-config\nwrite_files:\n - permissions: 744",
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "744"}}},
},
{
contents: "#cloud-config\nwrite_files:\n - permissions: '0744'",
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "0744"}}},
},
{
contents: "#cloud-config\nwrite_files:\n - permissions: '744'",
config: CloudConfig{WriteFiles: []File{File{RawFilePermissions: "744"}}},
},
}
for i, tt := range tests {
config, err := NewCloudConfig(tt.contents)
if err != nil {
t.Errorf("bad error (test case #%d): want %v, got %s", i, nil, err)
}
if !reflect.DeepEqual(&tt.config, config) {
t.Errorf("bad config (test case #%d): want %#v, got %#v", i, tt.config, config)
}
}
}
func TestIsZero(t *testing.T) {
tests := []struct {
c interface{}
empty bool
}{
{struct{}{}, true},
{struct{ a, b string }{}, true},
{struct{ A, b string }{}, true},
{struct{ A, B string }{}, true},
{struct{ A string }{A: "hello"}, false},
{struct{ A int }{}, true},
{struct{ A int }{A: 1}, false},
}
for _, tt := range tests {
if empty := IsZero(tt.c); tt.empty != empty {
t.Errorf("bad result (%q): want %t, got %t", tt.c, tt.empty, empty)
}
}
}
func TestAssertStructValid(t *testing.T) {
tests := []struct {
c interface{}
err error
}{
{struct{}{}, nil},
{struct {
A, b string `valid:"^1|2$"`
}{}, nil},
{struct {
A, b string `valid:"^1|2$"`
}{A: "1", b: "2"}, nil},
{struct {
A, b string `valid:"^1|2$"`
}{A: "1", b: "hello"}, nil},
{struct {
A, b string `valid:"^1|2$"`
}{A: "hello", b: "2"}, &ErrorValid{Value: "hello", Field: "A", Valid: "^1|2$"}},
{struct {
A, b int `valid:"^1|2$"`
}{}, nil},
{struct {
A, b int `valid:"^1|2$"`
}{A: 1, b: 2}, nil},
{struct {
A, b int `valid:"^1|2$"`
}{A: 1, b: 9}, nil},
{struct {
A, b int `valid:"^1|2$"`
}{A: 9, b: 2}, &ErrorValid{Value: "9", Field: "A", Valid: "^1|2$"}},
}
for _, tt := range tests {
if err := AssertStructValid(tt.c); !reflect.DeepEqual(tt.err, err) {
t.Errorf("bad result (%q): want %q, got %q", tt.c, tt.err, err)
}
}
}
func TestConfigCompile(t *testing.T) {
tests := []interface{}{
Etcd{},
File{},
Flannel{},
Fleet{},
Locksmith{},
OEM{},
Unit{},
Update{},
}
for _, tt := range tests {
ttt := reflect.TypeOf(tt)
for i := 0; i < ttt.NumField(); i++ {
ft := ttt.Field(i)
if !isFieldExported(ft) {
continue
}
if _, err := regexp.Compile(ft.Tag.Get("valid")); err != nil {
t.Errorf("bad regexp(%s.%s): want %v, got %s", ttt.Name(), ft.Name, nil, err)
}
}
}
}
func TestCloudConfigUnknownKeys(t *testing.T) {
contents := `
coreos:
etcd:
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
coreos_unknown:
foo: "bar"
section_unknown:
dunno:
something
bare_unknown:
bar
write_files:
- content: fun
path: /var/party
file_unknown: nofun
users:
- name: fry
passwd: somehash
user_unknown: philip
hostname:
foo
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("error instantiating CloudConfig with unknown keys: %v", err)
}
if cfg.Hostname != "foo" {
t.Fatalf("hostname not correctly set when invalid keys are present")
}
if cfg.CoreOS.Etcd.Discovery != "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877" {
t.Fatalf("etcd section not correctly set when invalid keys are present")
}
if len(cfg.WriteFiles) < 1 || cfg.WriteFiles[0].Content != "fun" || cfg.WriteFiles[0].Path != "/var/party" {
t.Fatalf("write_files section not correctly set when invalid keys are present")
}
if len(cfg.Users) < 1 || cfg.Users[0].Name != "fry" || cfg.Users[0].PasswordHash != "somehash" {
t.Fatalf("users section not correctly set when invalid keys are present")
}
}
// Assert that the parsing of a cloud config file "generally works"
func TestCloudConfigEmpty(t *testing.T) {
cfg, err := NewCloudConfig("")
if err != nil {
t.Fatalf("Encountered unexpected error :%v", err)
}
keys := cfg.SSHAuthorizedKeys
if len(keys) != 0 {
t.Error("Parsed incorrect number of SSH keys")
}
if len(cfg.WriteFiles) != 0 {
t.Error("Expected zero WriteFiles")
}
if cfg.Hostname != "" {
t.Errorf("Expected hostname to be empty, got '%s'", cfg.Hostname)
}
}
// Assert that the parsing of a cloud config file "generally works"
func TestCloudConfig(t *testing.T) {
contents := `
coreos:
etcd:
discovery: "https://discovery.etcd.io/827c73219eeb2fa5530027c37bf18877"
update:
reboot_strategy: reboot
units:
- name: 50-eth0.network
runtime: yes
content: '[Match]
Name=eth47
[Network]
Address=10.209.171.177/19
'
oem:
id: rackspace
name: Rackspace Cloud Servers
version_id: 168.0.0
home_url: https://www.rackspace.com/cloud/servers/
bug_report_url: https://github.com/coreos/coreos-overlay
ssh_authorized_keys:
- foobar
- foobaz
write_files:
- content: |
penny
elroy
path: /etc/dogepack.conf
permissions: '0644'
owner: root:dogepack
hostname: trontastic
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error :%v", err)
}
keys := cfg.SSHAuthorizedKeys
if len(keys) != 2 {
t.Error("Parsed incorrect number of SSH keys")
} else if keys[0] != "foobar" {
t.Error("Expected first SSH key to be 'foobar'")
} else if keys[1] != "foobaz" {
t.Error("Expected first SSH key to be 'foobaz'")
}
if len(cfg.WriteFiles) != 1 {
t.Error("Failed to parse correct number of write_files")
} else {
wf := cfg.WriteFiles[0]
if wf.Content != "penny\nelroy\n" {
t.Errorf("WriteFile has incorrect contents '%s'", wf.Content)
}
if wf.Encoding != "" {
t.Errorf("WriteFile has incorrect encoding %s", wf.Encoding)
}
if wf.RawFilePermissions != "0644" {
t.Errorf("WriteFile has incorrect permissions %s", wf.RawFilePermissions)
}
if wf.Path != "/etc/dogepack.conf" {
t.Errorf("WriteFile has incorrect path %s", wf.Path)
}
if wf.Owner != "root:dogepack" {
t.Errorf("WriteFile has incorrect owner %s", wf.Owner)
}
}
if len(cfg.CoreOS.Units) != 1 {
t.Error("Failed to parse correct number of units")
} else {
u := cfg.CoreOS.Units[0]
expect := `[Match]
Name=eth47
[Network]
Address=10.209.171.177/19
`
if u.Content != expect {
t.Errorf("Unit has incorrect contents '%s'.\nExpected '%s'.", u.Content, expect)
}
if u.Runtime != true {
t.Errorf("Unit has incorrect runtime value")
}
if u.Name != "50-eth0.network" {
t.Errorf("Unit has incorrect name %s", u.Name)
}
}
if cfg.CoreOS.OEM.ID != "rackspace" {
t.Errorf("Failed parsing coreos.oem. Expected ID 'rackspace', got %q.", cfg.CoreOS.OEM.ID)
}
if cfg.Hostname != "trontastic" {
t.Errorf("Failed to parse hostname")
}
if cfg.CoreOS.Update.RebootStrategy != "reboot" {
t.Errorf("Failed to parse locksmith strategy")
}
}
// Assert that our interface conversion doesn't panic
func TestCloudConfigKeysNotList(t *testing.T) {
contents := `
ssh_authorized_keys:
- foo: bar
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
keys := cfg.SSHAuthorizedKeys
if len(keys) != 0 {
t.Error("Parsed incorrect number of SSH keys")
}
}
func TestCloudConfigSerializationHeader(t *testing.T) {
cfg, _ := NewCloudConfig("")
contents := cfg.String()
header := strings.SplitN(contents, "\n", 2)[0]
if header != "#cloud-config" {
t.Fatalf("Serialized config did not have expected header")
}
}
func TestCloudConfigUsers(t *testing.T) {
contents := `
users:
- name: elroy
passwd: somehash
ssh_authorized_keys:
- somekey
gecos: arbitrary comment
homedir: /home/place
no_create_home: yes
primary_group: things
groups:
- ping
- pong
no_user_group: true
system: y
no_log_init: True
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
if len(cfg.Users) != 1 {
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
}
user := cfg.Users[0]
if user.Name != "elroy" {
t.Errorf("User name is %q, expected 'elroy'", user.Name)
}
if user.PasswordHash != "somehash" {
t.Errorf("User passwd is %q, expected 'somehash'", user.PasswordHash)
}
if keys := user.SSHAuthorizedKeys; len(keys) != 1 {
t.Errorf("Parsed %d ssh keys, expected 1", len(keys))
} else {
key := user.SSHAuthorizedKeys[0]
if key != "somekey" {
t.Errorf("User SSH key is %q, expected 'somekey'", key)
}
}
if user.GECOS != "arbitrary comment" {
t.Errorf("Failed to parse gecos field, got %q", user.GECOS)
}
if user.Homedir != "/home/place" {
t.Errorf("Failed to parse homedir field, got %q", user.Homedir)
}
if !user.NoCreateHome {
t.Errorf("Failed to parse no_create_home field")
}
if user.PrimaryGroup != "things" {
t.Errorf("Failed to parse primary_group field, got %q", user.PrimaryGroup)
}
if len(user.Groups) != 2 {
t.Errorf("Failed to parse 2 goups, got %d", len(user.Groups))
} else {
if user.Groups[0] != "ping" {
t.Errorf("First group was %q, not expected value 'ping'", user.Groups[0])
}
if user.Groups[1] != "pong" {
t.Errorf("First group was %q, not expected value 'pong'", user.Groups[1])
}
}
if !user.NoUserGroup {
t.Errorf("Failed to parse no_user_group field")
}
if !user.System {
t.Errorf("Failed to parse system field")
}
if !user.NoLogInit {
t.Errorf("Failed to parse no_log_init field")
}
}
func TestCloudConfigUsersGithubUser(t *testing.T) {
contents := `
users:
- name: elroy
coreos_ssh_import_github: bcwaldon
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
if len(cfg.Users) != 1 {
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
}
user := cfg.Users[0]
if user.Name != "elroy" {
t.Errorf("User name is %q, expected 'elroy'", user.Name)
}
if user.SSHImportGithubUser != "bcwaldon" {
t.Errorf("github user is %q, expected 'bcwaldon'", user.SSHImportGithubUser)
}
}
func TestCloudConfigUsersSSHImportURL(t *testing.T) {
contents := `
users:
- name: elroy
coreos_ssh_import_url: https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys
`
cfg, err := NewCloudConfig(contents)
if err != nil {
t.Fatalf("Encountered unexpected error: %v", err)
}
if len(cfg.Users) != 1 {
t.Fatalf("Parsed %d users, expected 1", len(cfg.Users))
}
user := cfg.Users[0]
if user.Name != "elroy" {
t.Errorf("User name is %q, expected 'elroy'", user.Name)
}
if user.SSHImportURL != "https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys" {
t.Errorf("ssh import url is %q, expected 'https://token:x-auth-token@github.enterprise.com/api/v3/polvi/keys'", user.SSHImportURL)
}
}

config/decode.go (new file, 56 lines)

@@ -0,0 +1,56 @@
package config
import (
"bytes"
"compress/gzip"
"encoding/base64"
"fmt"
)
func DecodeBase64Content(content string) ([]byte, error) {
output, err := base64.StdEncoding.DecodeString(content)
if err != nil {
return nil, fmt.Errorf("Unable to decode base64: %q", err)
}
return output, nil
}
func DecodeGzipContent(content string) ([]byte, error) {
gzr, err := gzip.NewReader(bytes.NewReader([]byte(content)))
if err != nil {
return nil, fmt.Errorf("Unable to decode gzip: %q", err)
}
defer gzr.Close()
buf := new(bytes.Buffer)
buf.ReadFrom(gzr)
return buf.Bytes(), nil
}
func DecodeContent(content string, encoding string) ([]byte, error) {
switch encoding {
case "":
return []byte(content), nil
case "b64", "base64":
return DecodeBase64Content(content)
case "gz", "gzip":
return DecodeGzipContent(content)
case "gz+base64", "gzip+base64", "gz+b64", "gzip+b64":
gz, err := DecodeBase64Content(content)
if err != nil {
return nil, err
}
return DecodeGzipContent(string(gz))
}
return nil, fmt.Errorf("Unsupported encoding %q", encoding)
}
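
DecodeContent peels the declared encodings off in order: for "gzip+base64" the content is base64-decoded first and the result gunzipped. A small round-trip sketch, assuming the import path github.com/coreos/coreos-cloudinit/config:

package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	// Prepare a payload the way a cloud-config author would for
	// `encoding: gzip+base64`: gzip the content, then base64 it.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write([]byte("#!/bin/bash\necho hello from write_files\n"))
	zw.Close()
	encoded := base64.StdEncoding.EncodeToString(buf.Bytes())

	plain, err := config.DecodeContent(encoded, "gzip+base64")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", plain)
	// #!/bin/bash
	// echo hello from write_files
}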

config/etc_hosts.go (new file, 17 lines)

@@ -0,0 +1,17 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type EtcHosts string

config/etcd.go (new file, 67 lines)

@@ -0,0 +1,67 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type Etcd struct {
Addr string `yaml:"addr" env:"ETCD_ADDR"`
AdvertiseClientURLs string `yaml:"advertise_client_urls" env:"ETCD_ADVERTISE_CLIENT_URLS"`
BindAddr string `yaml:"bind_addr" env:"ETCD_BIND_ADDR"`
CAFile string `yaml:"ca_file" env:"ETCD_CA_FILE"`
CertFile string `yaml:"cert_file" env:"ETCD_CERT_FILE"`
ClusterActiveSize int `yaml:"cluster_active_size" env:"ETCD_CLUSTER_ACTIVE_SIZE"`
ClusterRemoveDelay float64 `yaml:"cluster_remove_delay" env:"ETCD_CLUSTER_REMOVE_DELAY"`
ClusterSyncInterval float64 `yaml:"cluster_sync_interval" env:"ETCD_CLUSTER_SYNC_INTERVAL"`
CorsOrigins string `yaml:"cors" env:"ETCD_CORS"`
DataDir string `yaml:"data_dir" env:"ETCD_DATA_DIR"`
Discovery string `yaml:"discovery" env:"ETCD_DISCOVERY"`
DiscoveryFallback string `yaml:"discovery_fallback" env:"ETCD_DISCOVERY_FALLBACK"`
DiscoverySRV string `yaml:"discovery_srv" env:"ETCD_DISCOVERY_SRV"`
DiscoveryProxy string `yaml:"discovery_proxy" env:"ETCD_DISCOVERY_PROXY"`
ElectionTimeout int `yaml:"election_timeout" env:"ETCD_ELECTION_TIMEOUT"`
ForceNewCluster bool `yaml:"force_new_cluster" env:"ETCD_FORCE_NEW_CLUSTER"`
GraphiteHost string `yaml:"graphite_host" env:"ETCD_GRAPHITE_HOST"`
HeartbeatInterval int `yaml:"heartbeat_interval" env:"ETCD_HEARTBEAT_INTERVAL"`
HTTPReadTimeout float64 `yaml:"http_read_timeout" env:"ETCD_HTTP_READ_TIMEOUT"`
HTTPWriteTimeout float64 `yaml:"http_write_timeout" env:"ETCD_HTTP_WRITE_TIMEOUT"`
InitialAdvertisePeerURLs string `yaml:"initial_advertise_peer_urls" env:"ETCD_INITIAL_ADVERTISE_PEER_URLS"`
InitialCluster string `yaml:"initial_cluster" env:"ETCD_INITIAL_CLUSTER"`
InitialClusterState string `yaml:"initial_cluster_state" env:"ETCD_INITIAL_CLUSTER_STATE"`
InitialClusterToken string `yaml:"initial_cluster_token" env:"ETCD_INITIAL_CLUSTER_TOKEN"`
KeyFile string `yaml:"key_file" env:"ETCD_KEY_FILE"`
ListenClientURLs string `yaml:"listen_client_urls" env:"ETCD_LISTEN_CLIENT_URLS"`
ListenPeerURLs string `yaml:"listen_peer_urls" env:"ETCD_LISTEN_PEER_URLS"`
MaxResultBuffer int `yaml:"max_result_buffer" env:"ETCD_MAX_RESULT_BUFFER"`
MaxRetryAttempts int `yaml:"max_retry_attempts" env:"ETCD_MAX_RETRY_ATTEMPTS"`
MaxSnapshots int `yaml:"max_snapshots" env:"ETCD_MAX_SNAPSHOTS"`
MaxWALs int `yaml:"max_wals" env:"ETCD_MAX_WALS"`
Name string `yaml:"name" env:"ETCD_NAME"`
PeerAddr string `yaml:"peer_addr" env:"ETCD_PEER_ADDR"`
PeerBindAddr string `yaml:"peer_bind_addr" env:"ETCD_PEER_BIND_ADDR"`
PeerCAFile string `yaml:"peer_ca_file" env:"ETCD_PEER_CA_FILE"`
PeerCertFile string `yaml:"peer_cert_file" env:"ETCD_PEER_CERT_FILE"`
PeerElectionTimeout int `yaml:"peer_election_timeout" env:"ETCD_PEER_ELECTION_TIMEOUT"`
PeerHeartbeatInterval int `yaml:"peer_heartbeat_interval" env:"ETCD_PEER_HEARTBEAT_INTERVAL"`
PeerKeyFile string `yaml:"peer_key_file" env:"ETCD_PEER_KEY_FILE"`
Peers string `yaml:"peers" env:"ETCD_PEERS"`
PeersFile string `yaml:"peers_file" env:"ETCD_PEERS_FILE"`
Proxy string `yaml:"proxy" env:"ETCD_PROXY"`
RetryInterval float64 `yaml:"retry_interval" env:"ETCD_RETRY_INTERVAL"`
Snapshot bool `yaml:"snapshot" env:"ETCD_SNAPSHOT"`
SnapshotCount int `yaml:"snapshot_count" env:"ETCD_SNAPSHOTCOUNT"`
StrTrace string `yaml:"trace" env:"ETCD_TRACE"`
Verbose bool `yaml:"verbose" env:"ETCD_VERBOSE"`
VeryVerbose bool `yaml:"very_verbose" env:"ETCD_VERY_VERBOSE"`
VeryVeryVerbose bool `yaml:"very_very_verbose" env:"ETCD_VERY_VERY_VERBOSE"`
}
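
The env tags above name the environment variable each option is rendered to for etcd; the code that consumes them is not part of the files shown here, but tags like these can be read generically with reflect. An illustrative sketch only, using a hypothetical envVars helper:

package main

import (
	"fmt"
	"reflect"

	"github.com/coreos/coreos-cloudinit/config"
)

// envVars lists NAME=value pairs for every non-zero field that carries an
// env tag. Purely illustrative; the real consumer of these tags is not
// among the files shown in this diff.
func envVars(s interface{}) []string {
	var out []string
	v := reflect.ValueOf(s)
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		name := t.Field(i).Tag.Get("env")
		if name == "" {
			continue
		}
		f := v.Field(i)
		if f.Interface() == reflect.Zero(f.Type()).Interface() {
			continue // skip unset options
		}
		out = append(out, fmt.Sprintf("%s=%v", name, f.Interface()))
	}
	return out
}

func main() {
	etcd := config.Etcd{Name: "node-1", Discovery: "https://discovery.etcd.io/abc123"}
	for _, kv := range envVars(etcd) {
		fmt.Println(kv)
	}
	// ETCD_DISCOVERY=https://discovery.etcd.io/abc123
	// ETCD_NAME=node-1
}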

config/file.go (new file, 23 lines)

@@ -0,0 +1,23 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type File struct {
Encoding string `yaml:"encoding" valid:"^(base64|b64|gz|gzip|gz\\+base64|gzip\\+base64|gz\\+b64|gzip\\+b64)$"`
Content string `yaml:"content"`
Owner string `yaml:"owner"`
Path string `yaml:"path"`
RawFilePermissions string `yaml:"permissions" valid:"^0?[0-7]{3,4}$"`
}

config/file_test.go (new file, 71 lines)

@@ -0,0 +1,71 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"testing"
)
func TestEncodingValid(t *testing.T) {
tests := []struct {
value string
isValid bool
}{
{value: "base64", isValid: true},
{value: "b64", isValid: true},
{value: "gz", isValid: true},
{value: "gzip", isValid: true},
{value: "gz+base64", isValid: true},
{value: "gzip+base64", isValid: true},
{value: "gz+b64", isValid: true},
{value: "gzip+b64", isValid: true},
{value: "gzzzzbase64", isValid: false},
{value: "gzipppbase64", isValid: false},
{value: "unknown", isValid: false},
}
for _, tt := range tests {
isValid := (nil == AssertStructValid(File{Encoding: tt.value}))
if tt.isValid != isValid {
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
}
}
}
func TestRawFilePermissionsValid(t *testing.T) {
tests := []struct {
value string
isValid bool
}{
{value: "744", isValid: true},
{value: "0744", isValid: true},
{value: "1744", isValid: true},
{value: "01744", isValid: true},
{value: "11744", isValid: false},
{value: "rwxr--r--", isValid: false},
{value: "800", isValid: false},
}
for _, tt := range tests {
isValid := (nil == AssertStructValid(File{RawFilePermissions: tt.value}))
if tt.isValid != isValid {
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
}
}
}

config/flannel.go (new file, 26 lines)

@@ -0,0 +1,26 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type Flannel struct {
EtcdEndpoints string `yaml:"etcd_endpoints" env:"FLANNELD_ETCD_ENDPOINTS"`
EtcdCAFile string `yaml:"etcd_cafile" env:"FLANNELD_ETCD_CAFILE"`
EtcdCertFile string `yaml:"etcd_certfile" env:"FLANNELD_ETCD_CERTFILE"`
EtcdKeyFile string `yaml:"etcd_keyfile" env:"FLANNELD_ETCD_KEYFILE"`
EtcdPrefix string `yaml:"etcd_prefix" env:"FLANNELD_ETCD_PREFIX"`
IPMasq string `yaml:"ip_masq" env:"FLANNELD_IP_MASQ"`
SubnetFile string `yaml:"subnet_file" env:"FLANNELD_SUBNET_FILE"`
Iface string `yaml:"interface" env:"FLANNELD_IFACE"`
}

config/fleet.go (new file, 29 lines)

@@ -0,0 +1,29 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type Fleet struct {
AgentTTL string `yaml:"agent_ttl" env:"FLEET_AGENT_TTL"`
EngineReconcileInterval float64 `yaml:"engine_reconcile_interval" env:"FLEET_ENGINE_RECONCILE_INTERVAL"`
EtcdCAFile string `yaml:"etcd_cafile" env:"FLEET_ETCD_CAFILE"`
EtcdCertFile string `yaml:"etcd_certfile" env:"FLEET_ETCD_CERTFILE"`
EtcdKeyFile string `yaml:"etcd_keyfile" env:"FLEET_ETCD_KEYFILE"`
EtcdKeyPrefix string `yaml:"etcd_key_prefix" env:"FLEET_ETCD_KEY_PREFIX"`
EtcdRequestTimeout float64 `yaml:"etcd_request_timeout" env:"FLEET_ETCD_REQUEST_TIMEOUT"`
EtcdServers string `yaml:"etcd_servers" env:"FLEET_ETCD_SERVERS"`
Metadata string `yaml:"metadata" env:"FLEET_METADATA"`
PublicIP string `yaml:"public_ip" env:"FLEET_PUBLIC_IP"`
Verbosity int `yaml:"verbosity" env:"FLEET_VERBOSITY"`
}

config/locksmith.go (new file, 22 lines)

@@ -0,0 +1,22 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type Locksmith struct {
Endpoint string `yaml:"endpoint" env:"LOCKSMITHD_ENDPOINT"`
EtcdCAFile string `yaml:"etcd_cafile" env:"LOCKSMITHD_ETCD_CAFILE"`
EtcdCertFile string `yaml:"etcd_certfile" env:"LOCKSMITHD_ETCD_CERTFILE"`
EtcdKeyFile string `yaml:"etcd_keyfile" env:"LOCKSMITHD_ETCD_KEYFILE"`
}

config/oem.go (new file, 23 lines)

@@ -0,0 +1,23 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type OEM struct {
ID string `yaml:"id"`
Name string `yaml:"name"`
VersionID string `yaml:"version_id"`
HomeURL string `yaml:"home_url"`
BugReportURL string `yaml:"bug_report_url"`
}

config/script.go (new file, 31 lines)

@@ -0,0 +1,31 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"strings"
)
type Script []byte
func IsScript(userdata string) bool {
header := strings.SplitN(userdata, "\n", 2)[0]
return strings.HasPrefix(header, "#!")
}
func NewScript(userdata string) (*Script, error) {
s := Script(userdata)
return &s, nil
}
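
IsScript and IsCloudConfig classify user-data by its first line alone, which decides whether it is handed to NewScript or NewCloudConfig. A short sketch, again assuming the github.com/coreos/coreos-cloudinit/config import path:

package main

import (
	"fmt"

	"github.com/coreos/coreos-cloudinit/config"
)

func main() {
	for _, userdata := range []string{
		"#!/bin/bash\necho hi\n",
		"#cloud-config\r\nhostname: node-1\n", // CRLF header is handled too
		"just some text\n",
	} {
		switch {
		case config.IsScript(userdata):
			s, _ := config.NewScript(userdata)
			fmt.Printf("script, %d bytes\n", len(*s))
		case config.IsCloudConfig(userdata):
			cfg, err := config.NewCloudConfig(userdata)
			if err != nil {
				panic(err)
			}
			fmt.Println("cloud-config, hostname:", cfg.Hostname)
		default:
			fmt.Println("unrecognized user-data")
		}
	}
}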

config/unit.go (new file, 30 lines)

@@ -0,0 +1,30 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type Unit struct {
Name string `yaml:"name"`
Mask bool `yaml:"mask"`
Enable bool `yaml:"enable"`
Runtime bool `yaml:"runtime"`
Content string `yaml:"content"`
Command string `yaml:"command" valid:"^(start|stop|restart|reload|try-restart|reload-or-restart|reload-or-try-restart)$"`
DropIns []UnitDropIn `yaml:"drop_ins"`
}
type UnitDropIn struct {
Name string `yaml:"name"`
Content string `yaml:"content"`
}

config/unit_test.go (new file, 46 lines)

@@ -0,0 +1,46 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"testing"
)
func TestCommandValid(t *testing.T) {
tests := []struct {
value string
isValid bool
}{
{value: "start", isValid: true},
{value: "stop", isValid: true},
{value: "restart", isValid: true},
{value: "reload", isValid: true},
{value: "try-restart", isValid: true},
{value: "reload-or-restart", isValid: true},
{value: "reload-or-try-restart", isValid: true},
{value: "tryrestart", isValid: false},
{value: "unknown", isValid: false},
}
for _, tt := range tests {
isValid := (nil == AssertStructValid(Unit{Command: tt.value}))
if tt.isValid != isValid {
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
}
}
}

config/update.go (new file, 21 lines)

@@ -0,0 +1,21 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
type Update struct {
RebootStrategy string `yaml:"reboot_strategy" env:"REBOOT_STRATEGY" valid:"^(best-effort|etcd-lock|reboot|off)$"`
Group string `yaml:"group" env:"GROUP"`
Server string `yaml:"server" env:"SERVER"`
}

config/update_test.go (new file, 43 lines)

@@ -0,0 +1,43 @@
/*
Copyright 2014 CoreOS, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"testing"
)
func TestRebootStrategyValid(t *testing.T) {
tests := []struct {
value string
isValid bool
}{
{value: "best-effort", isValid: true},
{value: "etcd-lock", isValid: true},
{value: "reboot", isValid: true},
{value: "off", isValid: true},
{value: "besteffort", isValid: false},
{value: "unknown", isValid: false},
}
for _, tt := range tests {
isValid := (nil == AssertStructValid(Update{RebootStrategy: tt.value}))
if tt.isValid != isValid {
t.Errorf("bad assert (%s): want %t, got %t", tt.value, tt.isValid, isValid)
}
}
}

Some files were not shown because too many files have changed in this diff.