@xlson
@villesv
At the core (of agility): different kinds of feedback
getting metrics out of your team?
in fact, how well do you know your product?
feedback from users, ops, testing?
Once upon a time, at Entraction: two consultants, working as developer and tester
We wanted to settle concerns about the stability and performance of a new application about to be integrated into a legacy environment
The subject kept popping up; much labor to settle it (to our satisfaction) every time.
From the heart of the application.
Lacked "depth"; lacked reliability/repeatability; involved manual labor; provided slow feedback.
MBeans/JConsole => no mining, unreliable.
Cron/CSV/Excel => hard to repeat, much work, error prone.
Half a day trying it out; three simple/relevant metrics at the end of the day; data available => pursue further!
An immediate side effect: a shared understanding and language
Used in testing, verified improvements, helped prove out the prod env.
A monitoring project requested by PO/Ops; access to load testing for a long time; a curious team
As easy as setting up a Python webapp; all open source; made to scale
Graphite does not collect the data for you, but sending it yourself is really easy:
(Python)
import time
import socket

def collect_metric(name, value, timestamp):
    # Carbon's plaintext protocol: "<name> <value> <timestamp>\n",
    # one line per datapoint, sent to the listener on port 2003.
    sock = socket.socket()
    sock.connect(("localhost", 2003))
    sock.send(("%s %d %d\n" % (name, value, timestamp)).encode())
    sock.close()

def now():
    # Graphite expects Unix timestamps in whole seconds.
    return int(time.time())

collect_metric("meaning.of.life", 42, now())
(Clojure)
(import [java.net Socket]
        [java.io PrintWriter])

(defn write-metric [name value timestamp]
  ;; Same plaintext protocol: one "name value timestamp" line per datapoint.
  (with-open [socket (Socket. "localhost" 2003)
              os (.getOutputStream socket)]
    (binding [*out* (PrintWriter. os)]
      (println name value timestamp))))

(defn now []
  ;; Unix timestamp in whole seconds.
  (int (/ (System/currentTimeMillis) 1000)))

(write-metric "meaning.of.life" 42 (now))
[ demo time ]
Apply a function such as nonNegativeDerivative; talk about composability and how different kinds of data can be combined easily (see the sketch after these notes)
Watching changes take effect immediately
Availability over HTTP => huge benefit
Historic data for comparison and regression
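A minimal sketch of what the demo leans on: Graphite's render API applies functions server-side, so a derivative can be requested straight over HTTP. The host, port, and metric name below are assumptions; the render endpoint, its target/from/format parameters, and nonNegativeDerivative itself are standard Graphite features.

import urllib.request

# Hypothetical host/port and metric name; the query is standard Graphite
# render API: apply nonNegativeDerivative to the series and return the
# last hour as JSON.
url = ("http://localhost:8080/render"
       "?target=nonNegativeDerivative(meaning.of.life)"
       "&from=-1h&format=json")
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode())

Swapping format=json for format=png returns the rendered graph image instead, which is what makes sharing a graph as easy as pasting a URL.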
need for metrics, notice missing info, eagerness to retest/deploy, experimentation
easily spot trends/errors, don't panic, spotted prod. HW problems
outspoken within the team, outside (graph as fact), history as reference
at least one feature not impl., metrics show when/where improvement needed (or not)
Data as a reference point is natural when "composing" series is easy; regressions caught using historic data (sketched below)
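For instance, composing two series into an error rate and laying last week's values alongside today's. divideSeries and timeShift are standard Graphite render functions; the metric names and host below are made up for the sketch.

# Hypothetical metric names and host; divideSeries and timeShift are
# standard Graphite render functions.
error_rate = "divideSeries(app.errors.count,app.requests.count)"
last_week = 'timeShift(%s,"7d")' % error_rate  # same ratio, one week earlier
render_url = ("http://localhost:8080/render?target=%s&target=%s&from=-1d"
              % (error_rate, last_week))

Two target parameters draw both series in one graph: history as reference, with no extra tooling.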
start in development, silos, ready for prod.
Nurturing conversation with data
(with data as guide)
"Oh, what is that?" "Quite ready yet?" Shared decision making! What influences performance; associating names with data
changing procedures, investment in infrastructure
We started with a monitoring tool, we got so much more
ville.svard@agical.com | @villesv
leo@xlson.com | @xlson
Big Brother, NOC, Concerns, Blunt tools, Gather round, Thirst, Confidence, Behaviour, Design, Testing, Start monitoring, Demo Time
The team at Entraction (IGT)
Mårten Gustafson