Puppet coding


This page is about writing puppet code: how to write it, when to write it, where to put it. For information about how to install or manage puppet, visit this page.


Getting the source

As simple as

git clone --recursive https://gerrit.wikimedia.org/r/p/operations/puppet

When we use puppet

Puppet is our configuration management system. Anything related to the configuration files & state of a server should be puppetized. There are a few cases where configurations are deployed into systems without involving puppet but these are the exceptions rather than the rule (MediaWiki configuration etc.); all package installs and configurations should happen via puppet in order to ensure peer review and reproducibility.

However, Puppet is not used as a deployment system at Wikimedia. Pushing code via puppet, e.g. with the git::clone define, should be avoided. Depending on the case, either Debian packages or our deployment system (Scap3) should be used instead. Deploying software via Scap3 can be set up in puppet by using the scap::target class.
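For example, declaring a Scap3 deployment target in puppet might look roughly like this (a sketch: the repository name, user and exact parameters are illustrative; check the scap module for the actual interface):

# Hypothetical Scap3 deployment target; 'myservice/deploy' names the
# deployment repository and 'deploy-service' the deploying user.
scap::target { 'myservice/deploy':
    deploy_user => 'deploy-service',
}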

Labs

Labs users have considerable freedom in the configuration of their systems, and the state of labs machines is frequently not puppetized. Specific projects (e.g. toollabs) often have their own systems for maintaining code and configuration.

That said, any labs system being used for semi-production purposes (e.g. public test sites, bot hosts, etc.) should be fully puppetized. Labs users should always be ready for their instance to vanish at a moment's notice, and have a plan to reproduce the same functionality on a new instance -- generally this is accomplished using puppet.

The node definition for a labs machine is not stored in manifests/site.pp -- it is configured via the OpenStack Horizon user interface and stored in a Labs-specific database.

Organization

As of December 2016, we decided[1] to adopt our own variation of the role/profile pattern that is pretty common in puppet coding nowadays. Please note that existing code might not respect this convention, but any new code should definitely follow this model.

The code should be organized in modules, profiles and roles, where

  1. Modules should be basic units of functionality (e.g. "set up, configure and run HHVM").
  2. Profiles are collections of resources from modules that represent a high-level functionality (e.g. "a webserver able to serve mediawiki").
  3. Roles represent the collective function of one class of servers (e.g. "a mediawiki appserver for the API cluster").
  4. Any node declaration must include exactly one role, invoked with the role function. No exceptions to this rule. If you need to include two roles on a node, what you actually need is a new role that includes both.

Let's look in more detail at the rules that apply to each of these logical divisions.

Modules

Modules should represent basic units of functionality and should be mostly general-purpose and reusable across very different environments. Rules regarding organizing code in modules are simple enough:

  1. Any class, define or resource in a module must not use classes from other modules, and should avoid, wherever possible, using defines from other modules as well.
  2. No hiera call, explicit or implicit, should happen within a module.

These rules ensure that the amount of WMF-specific code that makes it into modules is minimal, and make modules easier to debug, refactor and test, since they do not depend on each other. To stay with the HHVM example, the base class hhvm is a good example of what belongs in a module. Conversely, hhvm::admin is a good example of what does not belong in a module (certainly not in this one): it configures apache to forward requests to HHVM, so it depends mostly on another module (apache), and it also adds ferm rules, which in turn require the WMF-specific network::constants class.
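To illustrate, a module class that follows these rules references only resources from its own module. This is just a minimal sketch, not the actual hhvm class; the package name, file path and template are illustrative:

# Self-contained: every resource referenced lives in this module.
class hhvm (
    $settings = {},
) {
    package { 'hhvm':
        ensure => present,
    }

    file { '/etc/hhvm/php.ini':
        ensure  => present,
        content => template('hhvm/php.ini.erb'),
        require => Package['hhvm'],
    }
}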

Profiles

Profiles are the classes where resources from modules are collected together, organized and configured. There are several rules on how to write a profile, specifically:

  1. Profile classes should only have parameters that default to explicit hiera calls with no fallback value.
  2. No hiera call should be made outside of said parameters.
  3. No resource should be added to a profile via the include function; use explicit class instantiations instead. Only very specific exceptions are allowed, such as truly global classes like network::constants.
  4. If a profile needs another one as a precondition, it must be listed with a require ::profile::foo at the start of the class, but profile cross-dependencies should be mostly avoided.
  5. If needed, any system::role definition should go in profiles.

Most of what we used to call "roles" at the WMF are in fact profiles. Following our example, an apache webserver that proxies to a local HHVM installation should be configured via a profile::hhvm::webproxy class; a mediawiki installation served through such a webserver should be configured via a profile::mediawiki::web class.
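A minimal sketch of such a profile, annotated with the rules above (the parameter and hiera key are illustrative, not the real class):

class profile::hhvm::webproxy (
    # Rules 1-2: hiera calls appear only as parameter defaults, with no fallback value
    $fcgi_settings = hiera('profile::hhvm::webproxy::fcgi_settings'),
) {
    # Rule 3: explicit class instantiation instead of 'include'
    class { '::hhvm':
        settings => $fcgi_settings,
    }

    # Rule 5: system::role definitions belong in profiles
    system::role { 'hhvm::webproxy':
        description => 'HHVM web proxy',
    }
}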

Roles

Roles are the abstraction describing a class of servers, and:

  1. Roles must only include profiles via the include keyword.
  2. A role can include more than one profile, but no conditionals, hiera calls, etc. are allowed.
  3. Inheritance can be used between roles, but it is strongly discouraged; remember, for instance, that inheritance will not work with hiera.
  4. All roles should include the standard profile.

Following our example, we should have a role::mediawiki::web that just includes profile::mediawiki::web and profile::hhvm::webproxy.
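In code, the role is then nothing more than a couple of includes (a sketch):

class role::mediawiki::web {
    include standard
    include profile::mediawiki::web
    include profile::hhvm::webproxy
}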

Hiera

Hiera is a powerful tool to decouple data from code in puppet, but as we saw while transitioning to it, it's not without its dangers and can easily become a tangled mess. To make it easier to understand and debug, the following rules apply when using it:

  1. No class parameter autolookup is allowed, ever. This means you should explicitly declare any variable as a parameter of the profile class, and pass it along explicitly to the classes/defines within the profile.
  2. Hiera calls can only be defined in profiles, as default values of class parameters.
  3. All hiera definitions for such parameters should be defined in the role hierarchy. The only exceptions are shared data structures used by many profiles, or data feeding different modules; those should go in the common/$site global hierarchy. A good example in our codebase is ganglia_clusters, or any list of servers (the memcached hosts for mediawiki, for example).
  4. Per-host hiera should only be used to allow tweaking some knobs for testing or to maybe declare canaries in a cluster. It should not be used to add/subtract functionality. If you need to do that, add a new role and configure it accordingly, within its own hiera namespace.
  5. Hiera keys must reflect the most specific common namespace of the puppet classes looking them up. This allows easier grepping and avoids conflicts. Global variables should be avoided as much as possible. This means that a parameter specific to a profile lives under that profile's namespace (say profile::hhvm::webproxy::fcgi_settings), settings shared among all profiles for a technology sit at the smallest common level (say profile::hhvm::common_settings for the settings common to any hhvm install), and the few genuinely global variables have no namespace at all (the ganglia_clusters example above) and use only snake case, with no colons.

Nodes in site.pp

Our traditional node definitions included a lot of global variables and boilerplate. Nowadays, a node definition should just be a simple one-liner:

node 'redis01.codfw.wmnet' {
    role(db::redis)
}

This will include the class role::db::redis and look for role-related hiera configuration, first in hieradata/role/codfw/db/redis.yaml and then in hieradata/role/common/db/redis.yaml, which may for example be:

cluster: redis
profile::admin::groups:
  - redis-admins

A working example: deployment host

Say we want to set up a deployment host role with the following things:

  • A Scap3 master, which includes the scap configuration and a simple http server
  • A docker build and deploy environment, which will need the kubernetes client cli tool
So we will want to have a simple module that installs and does the basic scap setup:
# Class docs, see below for the format
class scap3 (
    $config = {}
  ) {
    require_package('python-scap3', 'git')
    scap3::config { 'main':
        config => $config,
    }
}
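The scap3::config define referenced above lives in the same module; a minimal sketch of it might be (the file path and template name are illustrative):

# Renders a scap configuration file from the passed hash.
define scap3::config (
    $config = {},
) {
    file { "/etc/scap3/${title}.cfg":
        ensure  => present,
        content => template('scap3/config.erb'),
    }
}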
As you can see, the only resource referenced comes from the same module. In order for a master to work, we also need apache set up and firewall rules created. This is going to be a profile: we are defining one specific unit of functionality, the scap master.
# Class docs, as usual
class profile::scap3::master (
    $scap_config = hiera('profile::scap3::base_config'), # This might be shared with other profiles
    $server_name = hiera('profile::scap3::master::server_name'),
    $max_post_size = hiera('profile::scap3::master::max_post_size'),
    $mediawiki_proxies = hiera('scap_mediawiki_proxies'), # This is a global list
) {
    system::role { 'scap master': }
    class { 'scap3':
        config => merge({ 'server_name' => $server_name}, $scap_config),
    }
    
    # Set up apache
    apache::conf { 'Max_post_size':
        content => "LimitRequestBody ${max_post_size}",
    }
    
    apache::site { 'scap_master':
        content => template('profile/scap/scap_master.conf.erb'),
        ...
    }
    # Firewalling
    ferm::service { 'scap_http':
        proto => 'tcp',
        port  => $scap_config['http_port'],
    }
    
    # Monitoring
    monitoring::service { 'scap_http':
        command => "check_http!${server_name}!/"
    }
}
For the docker build environment, you will probably want to set up a specific profile, and then another one for the docker deployment environment; the latter actually depends on the former in order to work. Assuming we already have a docker module that helps install docker and set it up on a server, and a kubernetes module with a class for installing the cli tools, we can first create a profile for the build environment:
class profile::docker::builder(
    $proxy_address = hiera('profile::docker::builder::proxy_address'),
    $proxy_port = hiera('profile::docker::builder::proxy_port'),
    $registry = hiera('docker_registry'), # this is a global variable again
){
    system::role { 'role::docker::builder':
        description => 'Docker images builder'
    }
    
    # Let's suppose we have a simple class setting up docker, with no params
    # to override in this case
    class { 'docker':
    }

    # We will need docker baseimages
    class { 'docker::baseimages':
        docker_registry => $registry,
        proxy_address   => $proxy_address,
        proxy_port      => $proxy_port,
        distributions   => ['jessie'],
    }

    # we will need some build scripts; they belong here
    file { '/usr/local/bin/build-in-docker':
        source => 'puppet:///modules/profile/docker/builder/build-in-docker.sh',
        ...
    }
    ...
    # Monitoring goes here, not in the docker class
    nrpe::monitor_systemd_unit_state { 'docker-engine': }
}
and then the deployment profile will need to add credentials for the docker registry for uploading, the kubernetes cli tools, and a deploy script. It can't work unless profile::docker::builder is included, though:
class profile::docker::deployment_server (
    $registry = hiera('docker_registry'),
    $registry_credentials = hiera('profile::docker::registry_credentials'),
) {
    # Require a profile needed by this one
    require ::profile::docker::builder
    # auth config
    file { '/root/.docker/config.json':
    ...
    }
    class { 'kubernetes::cli':
    }
    # Kubernetes-based deployment script
    require_package('kubernetes-deploy')
}
Then the role class for the whole deployment server will just include the relevant profiles:
class role::deployment_server {
    # Standard is a profile, to all effects
    include standard
    include profile::scap3::master
    include profile::docker::deployment_server
}
and in site.pp
node /deployment.*/ {
    role('deployment_server')
}
Configuration will be done via hiera; most of the definitions will go into the role hierarchy, so in this case: hieradata/role/common/deployment_server.yaml
profile::scap3::master::server_name: "deployment.%{::site}.wmnet"
# This could be shared with other roles and thus defined in a common hierarchy in some cases.
profile::scap3::base_config:
    use_proxies: yes
    deployment_dir: "/srv" 
profile::scap3::master::max_post_size: "100M"
profile::docker::builder::proxy_address: 'http://webproxy.eqiad.wmnet:3128'
...
Some other definitions are global, and they might go in the common hierarchy, so: hieradata/common.yaml
docker_registry: 'https://registry.wikimedia.org'
scap_mediawiki_proxies:
    - mw1121.eqiad.wmnet
    - mw2212.eqiad.wmnet
while others can be shared between multiple profiles; in this case, note the 'private' prefix, as this value is supposed to be a secret: hieradata/private/common/profile/docker.yaml
profile::docker::registry_credentials: "some_secret"

WMF Design conventions

  • Always include the 'base' class for every node (note that standard includes base and should be used in most cases)
  • For every service deployed, please use a system::role definition (defined in modules/system/manifests/role.pp) to indicate what a server is running. This will be put in the MOTD. As the definition name, you should normally use the relevant puppet class. For example:
system::role { 'role::cache::bits':
    description => 'bits Varnish cache server',
}
  • Files that are fully deployed by Puppet using the file type, should generally use a read-only file mode (i.e., 0444 or 0555). This makes it more obvious that this file should not be modified, as Puppet will overwrite it anyway.
  • For each service, create a nested class with the name profile::service::monitoring (e.g. profile::squid::monitoring) which sets up any required (Nagios) monitoring configuration on the monitoring server.
  • Any top-level class definitions should be documented with a descriptive header, like this:
 # Mediawiki_singlenode: A one-step class for setting up a single-node MediaWiki install,
 #  running from a Git tree.
 #
 #  Roles can insert additional lines into LocalSettings.php via the
 #  $role_requires and $role_config_lines vars.
 #  etc.

Such descriptions are especially important for role classes. Comments like these are used to generate our online puppet documentation.

Coding Style

Please read the upstream style guide, and install puppet-lint.

Format

Many existing manifests use two-space indentation (as suggested in the style guide) instead of our 4-space indent standard; when working on existing code, always follow the existing whitespace style of the file or module you are editing. Please do not mix cleanup changes with functional changes in a single patch.

Spacing, Indentation, & Whitespace

  • Must use four-space soft tabs.
  • Must not use literal tab characters.
  • Must not contain trailing whitespace.
  • Must align fat comma arrows (=>) within blocks of attributes.

Quoting

  • Must use single quotes unless interpolating variables.
  • All variables should be enclosed in braces ({}) when being interpolated in a string (see the sketch after this list).
  • Variables standing by themselves should not be quoted.
  • Must not quote booleans: true is ok, but not 'true' or "true".
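A short sketch putting these quoting rules together (names are illustrative):

$conf_dir  = '/etc/mediawiki'           # no interpolation: single quotes
$conf_file = "${conf_dir}/wiki.conf"    # interpolation: double quotes and braces
file { $conf_file:                      # variable standing by itself: no quotes
    ensure => present,
    backup => false,                    # boolean: never quoted
}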

Resources

  • Must single quote all resource names and their attribute values, except ensure (unless they contain a variable, of course).
  • Ensure must always be the first attribute.
  • Put a trailing comma after the final resource parameter.
  • Again: Must align fat comma arrows (=>) within blocks of attributes.
  • Don't group resources of the same type (a.k.a. compression):

do

file { '/etc/default/exim4':
    require => Package['exim4-config'],
    owner   => 'root',
    group   => 'root',
    mode    => '0444',
    content => template('exim/exim4.default.erb'),
}
file { '/etc/exim4/aliases/':
    ensure  => directory,
    require => Package['exim4-config'],
    mode    => '0755',
    owner   => 'root',
    group   => 'root',
}

don't do

file { '/etc/default/exim4':
    require => Package['exim4-config'],
    owner   => 'root', 
    group   => 'root', 
    mode    => '0444', 
    content => template('exim/exim4.default.erb');
'/etc/exim4/aliases/':
    ensure  => directory,
    require => Package['exim4-config'],
    mode    => '0755', 
    owner   => 'root', 
    group   => 'root', 
}
  • Keep the resource name and the resource type on the same line; no need for extra indentation.

Conditionals

  • Don't use selectors inside resources:

good:

$file_mode = $::operatingsystem ? {
    debian => '0007',
    redhat => '0776',
    fedora => '0007',
}
file { '/tmp/readme.txt':
    content => "Hello World\n",
    mode    => $file_mode,
}

bad:

file { '/tmp/readme.txt':
    mode => $::operatingsystem ? {
        debian => '0777',
        redhat => '0776',
        fedora => '0007',
    }
}
  • Case statements should have default cases.

Classes

All classes and resource type definitions must be in separate files in the manifests directory of their module.

  • Do not nest classes.
  • NEVER EVER use inheritance; puppet is not good at it. Also, inheritance will make your life harder when you need to use hiera - really, don't.
  • Try not to use top-scope variables, but if you do use them, scope them correctly:
$::operatingsystem

but not

$operatingsystem
  • Do not use dashes in class names; preferably use alphabetic names only.
  • In parameterized class and defined resource type declarations, required parameters should be listed before optional parameters, as in the sketch below.
  • It is in general better to avoid parameters that don't have a default; that will only make your life harder as you need to define that variable for every host that includes it.
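A sketch of the parameter ordering rule (all names here are illustrative):

define myservice::instance (
    $port,                  # required parameters first: no default
    $ensure  = present,     # then optional, defaulted ones
    $workers = 4,
) {
    # resources using $port, $ensure and $workers would go here
}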

Including

  • One include per line.
  • One class per include (see the sketch below).
  • Include only the class you need, not the entire scope.
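For example (class names are illustrative):

# Good: one include per line, one class per include.
include mediawiki
include mediawiki::web

# Bad: several classes in a single include statement.
include mediawiki, mediawiki::web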

Useful global variables

These are useful variables you can refer to from anywhere in the Puppet manifests. Most of these get defined in realm.pp or base.pp.

$::realm 
The "realm" the system belongs to. As of July 2013 we have the realms production, fundraising and labs.
$::site 
Contains the 5-letter site name of the server, e.g. "pmtpa", "eqiad" or "esams".
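These typically drive realm- or site-dependent settings, e.g. (a sketch with illustrative names):

# Pick a configuration file depending on realm and data center.
if $::realm == 'labs' {
    $config_file = '/etc/myservice/labs.conf'
} else {
    $config_file = "/etc/myservice/${::site}.conf"
}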

Before you submit a patch

Parser validation

You can syntax check your changes with:

# puppet parser validate filename-here

Lint

You can locally install puppet-lint and use it to check your code before submitting, or improve existing code by fixing puppet-lint errors/warnings. See puppet-lint on GitHub.

install puppet-lint

on Debian/Ubuntu
apt-get install puppet-lint
on Mac OS X
sudo gem install puppet-lint
generic (Ruby gem)
gem install puppet-lint

how to use

puppet-lint manifest.pp

or

puppet-lint --with-filename /etc/puppet/modules

common errors

tab character found on line ..

FIX: do not use tabs, use 4-space soft tabs

Have this in your vim config (.vimrc):

set tabstop=4
set shiftwidth=4
set softtabstop=4
set smarttab
set expandtab

or use something like this (if your local path is ./wmf/puppet/) to apply it to puppet files only.

" Wikimedia style uses 4 spaces for indentation
autocmd BufRead */wmf/puppet/* set sw=4 ts=4 et

open the file, :retab, :wq, done. Make sure to review the resulting change carefully before submitting it.

Or put this in your emacs config (.emacs)

;; Puppet config with 4 spaces
(setq puppet-indent-level 4)
(setq puppet-include-indent 4)

double quoted string containing no variables

FIX: use single quotes (') for all strings unless there are variables to parse in it

unquoted file mode

FIX: always quote file modes with single quotes, like: mode => '0750'

line has more than 80 characters

FIX: wrap your lines to fewer than 80 characters; if you have to, you can continue a line with \<newline>. Vim can help when writing; place this in your .vimrc:

set textwidth=80
not in autoload module layout

FIX: turn your code into a puppet module (Module Fundamentals)

ensure found on line but it's not the first attribute

FIX: move your "ensure =>" to the top of the resource section. (don't forget to turn a ; into a , if it was the last attribute before)

unquoted resource title

FIX: quote all resource titles, using single quotes

top-scope variable being used without an explicit namespace

FIX: use an explicit namespace in variable names (Scope and Puppet)

class defined inside a class

FIX: don't define classes inside classes

quoted boolean value

FIX: do NOT quote boolean values ( => true/ => false)

case statement without a default case

FIX: add a default case to your case statement.
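For example:

case $::operatingsystem {
    'Debian': {
        $package = 'apache2'
    }
    'RedHat': {
        $package = 'httpd'
    }
    default: {
        fail("Unsupported operating system ${::operatingsystem}")
    }
}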

Ignoring lint warnings and errors

If you want, you can always ignore specific warnings/errors by surrounding the relevant lines with a special "lint:ignore" comment. For example, this would ignore "WARNING: top-scope variable being used without an explicit namespace" on line 54.

 53         # lint:ignore:variable_scope
 54         default     => $fqdn
 55         # lint:endignore

You can get all the names of the separate checks with puppet-lint --help. More info is at http://puppet-lint.com/controlcomments/

Find out which warnings/errors are currently ignored

There are two parts to this. The first is to check the global .puppet-lint.rc file in the root of the puppet repo, which lists the checks that are currently ignored globally.

The second part is checking for individual ignores; find these with grep -r "lint:ignore" * in the root of the puppet repo. You can help by fixing and removing any of these remaining issues.

The tracking task for getting this to perfection is https://phabricator.wikimedia.org/T93645.

Labs testing

Nontrivial puppet changes should be applied to a labs instance before being merged into production. This can uncover behaviors and code interactions that don't appear during individual file tests -- for example, puppet runs frequently fail due to duplicate resource definitions that aren't obvious to the naked eye.

To test a puppet patch:

1. Create a self-hosted puppetmaster instance.

2. Configure that instance so that it defines the class you're working on. You can do this either via the 'configure instance' page or by editing /var/lib/git/operations/puppet/manifests/site.pp to contain something like this:

   node this-is-my-hostname {
       include class::I::am::working::on
   }

3. Run puppet a couple of times ('$ sudo puppetd -tv') until each subsequent puppet run is clean and doesn't modify anything.

4. Apply your patch to /var/lib/git/operations/puppet. Do this by cherry-picking from gerrit or by rsyncing from a local working directory.

5. Run puppet again, and note the changes that this puppet run makes. Does puppet succeed? Are the changes what you expected?

Manual module testing

A relatively simple and crude way to test is:

puppet apply --noop --modulepath /path/to/modules <manifest>.pp

Do note, however, that this might not work if you reference anything outside of the module hierarchy.

You can get around the missing module hierarchy problem by cloning a local copy of the puppet repo and symlinking in your new module directory.

eg.

git clone --branch production https://gerrit.wikimedia.org/r/p/operations/puppet.git
cd puppet/modules
ln -s /path/to/mymodule .
puppet apply --verbose --noop --modulepath=/home/${USER}/puppet/modules /path/to/mymodule/manifests/init.pp


Unit Testing modules

Rake tests

Some modules have unit tests already written -- the other modules still need them! Modules with tests have a Rakefile in their top dir, and a subdir called 'spec'.

Modules imported from upstream typically have tests that run against other upstream modules -- we, however, seek to have tests pass when running only against our own repository. In order to set that up you'll need to run

 $ rake spec

once in the top level directory to get things set up properly. After that you can test individual modules by running

 $ rake spec_standalone

in specific module subdirs.

If you want to compare your results with the test results on our official testing box, check out this page: https://integration.wikimedia.org/ci/job/test-puppet-rspec/


Custom tests

If you are testing a module, it makes sense to group these simple tests in a tests/ directory in your module's hierarchy. Your tests can be as simple as

include myclass

in a file called myclass.pp in the tests directory, or a lot more complex, calling parameterized classes and definitions.
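For instance, a test manifest exercising a parameterized class could look like this (class name and parameters are illustrative):

# tests/myclass_custom.pp
class { 'myclass':
    ensure => present,
    port   => 8080,
}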

All this can also be automated by including the following Makefile in the tests directory:

MANIFESTS=$(wildcard *.pp)
OBJS=$(MANIFESTS:.pp=.po)
TESTS_DIR=$(dir $(CURDIR))
MODULE_DIR=$(TESTS_DIR:/=)
MODULES_DIR=$(dir $(MODULE_DIR))

all:	test

test:	$(OBJS)

%.po:	%.pp
	puppet parser validate $<
	puppet apply --noop --modulepath $(MODULES_DIR) $<

and running make from the command line (assuming you have make installed).

Please note that --noop does not mean that no code will be executed; it means puppet will not change the state of any resource. At the very least, exec resources' conditionals, as well as puppet parser functions and facts, will execute normally. So don't go around testing untrusted code.

Puppet Modules

There are currently two high-level types of modules. For most things, modules should not contain anything that is specific to the Wikimedia Foundation. Non-WMF-specific modules should be usable in any other puppet repository at any other organization. A WMF-specific module is different: it may contain configurations specific to WMF (duh), but remember that it is still a module, so it must be usable on its own as well. Users of either type of module should be able to use the module without editing anything inside of it. WMF-specific modules will probably be higher-level abstractions of services that use and depend on other modules, but they may not refer to anything inside of the top-level manifests/ directory. E.g. the 'applicationserver' module abstracts usages of apache, php and pybal to set up a WMF application server.

Often it will be difficult to choose between creating role classes and creating a WMF specific module. There isn't a hard rule on this. You should use your best judgement. If role classes start to get overly complicated, you might consider creating a WMF specific module instead.

3rd party or upstream modules

There are so many great modules out there! Why spend time writing your own?!

Well, for good reasons. Puppet runs as root on the production nodes. We can't import just any 3rd party module, as we can't be sure we can trust it. Not because it would do something malicious (although it might), but because it might do something stupid.

All 3rd party modules must be reviewed in the same manner that we review our own code before it goes to production.

git submodules

Even so, since puppet modules are supposed to be their own projects, it is sometimes improper to maintain them as subdirectories inside of the operations/puppet repository. This goes for 3rd party modules as well as non-WMF specific modules. WMF specific modules can and probably should remain as subdirectories inside of operations/puppet.

We are starting to use git submodules to manage puppet modules. Puppet modules must go through the same review process as anything in operations/puppet.

Adding a new puppet module as a git submodule

First up is adding a new puppet module. 'git submodule add' will modify the .gitmodules file, and also take care of cloning the remote into the local directory you specify.

 otto@localhost:~/puppet# git submodule add https://gerrit.wikimedia.org/r/p/operations/puppet/<my-module> modules/<my-module>

git status shows the modified .gitmodules file, as well as a new 'file' at modules/<my-module>. This new file is a pointer to a specific commit in the <my-module> repository.

 otto@localhost:~/puppet# git status
 # On branch master
 # Changes to be committed:
 #   (use "git reset HEAD <file>..." to unstage)
 #
 #	modified:   .gitmodules
 #	new file:   modules/<my-module>
 #

Commit the changes and post them to gerrit for review.

 otto@localhost:~/puppet# git commit && git review

This will show up as a change in the operations/puppet repository. The change will not show the actual code that was added; instead, it only shows a diff of the SHA1 the new submodule points at. Once this change has been reviewed, approved, and merged, those with operations/puppet checked out will have to run

 git submodule update --init

to be sure that they get the changes required in their local working copies. This is only really necessary if other users want to view the submodule's content locally.

Making changes to a submodule

You should never edit a submodule directly in its subdirectory of an operations/puppet working copy. If you want to make changes to a submodule, clone that submodule elsewhere directly, edit it there, and submit the changes for review.

 git clone https://gerrit.wikimedia.org/r/p/operations/puppet/<my-module>
 cd <my-module>
 # edit stuff, push to gerrit for review.
 git commit && git review

Once your module change has been approved, you can update your operations/puppet working copy so that it points to the SHA1 you want it to.

 cd path/to/operations/puppet/modules/<my-module>
 git pull # or whatever you need to check out your desired SHA1.
 cd ../..
 git commit modules/<my-module> && git review

This will push the update of the new SHA1 to operations/puppet for review.

Miscellaneous

VIM guidelines

The following in ~/.vim/ftdetect/puppet.vim can help avoid a lot of formatting errors:

" detect puppet filetype
autocmd BufRead,BufNewFile *.pp set filetype=puppet
autocmd BufRead,BufNewFile *.pp setlocal tabstop=4 shiftwidth=4 softtabstop=4 expandtab textwidth=80 smarttab

And for proper syntax highlighting the following can be done:

sudo aptitude install vim-puppet
mkdir -p ~/.vim/syntax
cp /usr/share/vim/addons/syntax/puppet.vim ~/.vim/syntax/

And definitely have a look at the vim plugin syntastic (https://github.com/scrooloose/syntastic), which will report puppet errors directly in your buffer whenever you save the file (it works for python/php etc. as well).

Of course symlinks can be used, or you can just install vim-addon-manager to manage plugins. vim-puppet provides an ftplugin and an indent plugin as well; maybe they are worth the time, but it is up to each user to decide.

Emacs guidelines

Syntax Highlighting

The puppet-el Debian package can be used for emacs syntax highlighting, or the raw emacs libraries can be used directly.

puppet-el - syntax highlighting for puppet manifests in emacs

The following two sections can be added to a .emacs file to help with 4-space indentation and trailing whitespace.

;; Puppet config with 4 spaces
(setq puppet-indent-level 4)
(setq puppet-include-indent 4)

;; Remove Trailing Whitespace on Save
(add-hook 'before-save-hook 'delete-trailing-whitespace)

Puppet for Labs

There is currently only one puppet repository, and it is applied both to labs instances and production instances. Classes may be added to the Operations/Puppet repository that are only intended for use on a labs instance. The future is uncertain, though: code is often reused, and labs services are sometimes promoted to production. For that reason, any changes made to Operations/Puppet must be held to the same style and security standards as code that would be applied on a production service.

Packages that use pip, gem, etc.

Other than mediawiki and related extensions, any software installed by puppet should be a Debian package and should come either from the WMF apt repo or from an official upstream Ubuntu repo. Never use source-based packaging systems like pip or gem as these options haven't been properly evaluated and approved as secure by WMF Operations staff.

References

  1. https://phabricator.wikimedia.org/T147718