Exercise for the Reader

July 5, 2010

Server Monitoring, Revisited

Filed under: Computing Infrastructure — Seth Porter @ 12:35 pm

In which we belatedly celebrate a year’s worth of baseline data; entertain an aside into intrusion detection via passive measurements; resolve some of the Mystery of the 12V Deviations; and conclude with daydreams of better sensors and monitoring beyond the box itself.

A Year of Data

A few months ago, I hit the milestone of a full year’s worth of data collected by my server monitoring tools. I started a writeup at the time, which I’m finally revisiting in hopes of actually posting it. (I’ve also learned that I seem to be capturing exactly a year of historical data, which is unfortunate; I’d perhaps naively assumed that the powers-of-two backoff in storage frequency would give me an infinite time window, albeit eventually degrading to a single sample for the oldest time slice.)
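(In retrospect, the finite window is what the arithmetic predicts: if each consolidation tier keeps a fixed number of rows and only the per-row span doubles from tier to tier, then the oldest tier bounds the whole history, and an infinite window would need an unbounded number of tiers. A toy model in Python, with invented tier sizes rather than my actual RRD configuration:)

```python
def total_window_s(base_step_s, tiers):
    """Oldest data retained, given (consolidation_factor, rows) per tier.

    Each tier keeps `rows` points, each spanning factor * base_step_s
    seconds, so the history is bounded by the coarsest tier.
    """
    return max(factor * base_step_s * rows for factor, rows in tiers)

# five-minute base step, powers-of-two consolidation, 600 rows per tier
tiers = [(1, 600), (2, 600), (4, 600), (8, 600)]
print(round(total_window_s(300, tiers) / 86400, 1))  # 16.7 days
```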

First, a quick overview; here is a year’s worth of smoothed temperature data (the average-per-interval is plotted, whereas my usual “is something wrong” daily charts plot the maximum in a given interval).

Smoothed 1 Year Temperatures

The temperature curve is about what you’d predict for a computer that lives in a poorly-insulated room in Pittsburgh. I lost my outside temperature correlation data (RRDWeather gave up on me a while ago – maybe weather.com changed their policies, or just their web service API, and I never really did anything beyond sending an e-mail to the developer). However, I’m quite confident that they’re well aligned; anyone who lived here this year can pick out, say, that late warm spell we had at the end of November. The one thing that concerns me a little is the decreasing separation between the board and the drive temps; that makes me wonder if I’ve obstructed airflows a little (leading to more indirect heating of the drives). Still and all, there are no major deviations here, and a year later I’m basically in the same temp bracket that I was last July, which is somewhat soothing (and the whole point of establishing this baseline data).

One of the ongoing puzzles from this server-monitoring data has been the voltage data. It has a variety of odd characteristics, including daily and seasonal cycles, as well as a fair amount of apparent noise. I had thought they’d quieted down a lot when I installed an uninterruptible power supply (with supply-voltage smoothing). I still think that’s true for most of the lines, particularly the 3.3V and 1.5V rails, which are actually providing power to the chips. However, there’s still a significant variation on the 5V and 12V rails. Looking at this data for the past month, you can clearly see a daily cycle on the 12V rail through Weeks 24 and 25:

Voltages June-July 2010

In the chart here, you can also see a “reboot discontinuity” at the beginning of week 23; from memory, I think this was a kernel update.

An Aside Into Passive Monitoring

The occasional almost-vertical drops in voltage (noticeable in this chart at the reboot and the end of week 26) are strongly correlated with system usage (for instance, the CPU utilization chart); a co-worker has suggested that this would be an interesting approach to intrusion detection. I might get more into this in a later post, but history suggests otherwise, so please tolerate a brief excursion to look at another set of charts, in this case the history of the DNS daemon running on the server. First, the last 28 hours of data:

DNS Data - Last 28 Hours

I haven’t spent as much time exploring the reporting mechanism for this chart, so things don’t add up as nicely as some of the others, but the intent here is that the thin blue line records the number of requests, which is then partitioned into success or failure cases. I’m not sure if the open gap (between the explained results and the number of queries) is due to failures other than NXDomain (non-existent domain), or if this is simply an artifact of differing time scales and reporting methods. I’m also not sure about the sampling time frame – this is obviously “queries per time period”, but without knowing the granularity, the absolute scale (area under the curve / total queries per hour) is meaningless. However, the basic view of local network activity is pretty compelling, as long as it’s interpreted strictly in relative terms.

For a moment of background: Calliope, the server, performs both DHCP and DNS duties. Except for a couple of local hosts, DNS is simply forwarded and cached from Verizon’s DNS provider. The interesting bit here is that my local server provides DNS with zero “time to live”, so clients will immediately ask again if they need to resolve the same host repeatedly. This means that the record of DNS activity on the server is a reasonably accurate record of internet usage across all hosts in my network, even the uninstrumented ones like the PlayStation etc.
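(I never dug into how the charting tool does its tally, but the core of it is trivial to reproduce. As a sketch, assuming we had the queries as (unix timestamp, client) pairs, which is an invented format rather than whatever my DNS daemon actually logs, binning into five-minute buckets looks like:)

```python
from collections import Counter

def bin_queries(records, bucket_s=300):
    """Tally DNS queries into bucket_s-second intervals.

    records: iterable of (unix_timestamp, client) pairs.
    Returns {bucket_start: count}, i.e. the "thin blue line" number.
    """
    counts = Counter()
    for ts, _client in records:
        counts[int(ts) - int(ts) % bucket_s] += 1
    return dict(counts)

# three queries in the first five minutes, one in the next
sample = [(10, "192.168.1.5"), (200, "192.168.1.5"),
          (299, "192.168.1.9"), (301, "192.168.1.5")]
print(bin_queries(sample))  # {0: 3, 300: 1}
```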

In this chart, you can see a midnight spike, probably primarily cron jobs and the like. More interestingly, you can see my flurry of surfing in late afternoon, trying to figure out whether the T was a viable way to see the city fireworks, followed by several dark hours when we were off watching things go boom. Then this morning you can see machines coming on for the first time (checking for updates, etc), a period of quiet coffee drinking (lately I’m reading “The Worst Journey in the World”, rather than surfing, over coffee), and then a steady rumble of activity as I fired up my main box and started working on this post.

The weekly and monthly charts have similar stories to tell, but not much additional interest; overall activity tends to peak on the weekends, and during the week there’s a definite shift to activity later in the day (when I get home from work), but this is all predictable (though useful from an intrusion-detection point of view). Before returning to the earlier question of voltage excursions, I’ll just post one more DNS chart, this time showing the past year’s activity:

DNS Data - Past Year

For the most part, the patterns have disappeared into noise, though there’s still a definite periodicity; I wonder if my cycle of amusements (computer vs books vs television, and so forth) is really that predictable? There’s also a spike around Christmas, from shopping and travel planning, and an interesting nadir in March, which I think I can explain: this was a period of high intensity at work, and just before I started working on a consulting side-project (which explains the above-average usage for the following months). In any event, I find this data kind of interesting, though the real fun would be in correlating it against a diary or other activity measures to see if it’s really a valid proxy for how much time I spend on the computer.

Back to the Voltage Fluctuation

Anyway, returning to the voltage dilemma, we saw a month’s worth of data previously; here is the data for the full year:

Motherboard Voltages: Past Year

It’s not the easiest chart to read, I admit. In all cases the plots are deviations from “nominal voltages”. Most of the traces are really rock solid, varying only a few millivolts if that. However, there’s clearly something going on with the 5V and 12V lines: they vary together, and they have a clear seasonal component. That pattern from October and November is particularly interesting, if you remember that both those months had sharp cold spikes followed by surprisingly mild weather.

I’ve probably given away the game by now, but I’ll walk through it anyway. Since statistics are pretty tricky on these datasets (the data points are actually averages over varying intervals, and I suspect that does evil things to the normal distribution assumption), I decided to satisfy my curiosity with a visual correlation instead. To do this, I plotted a key temperature curve (reported motherboard temperature) on the same graph as the 5V and 12V deviation plots, then inverted the scale on the temperature (so valleys are the highest temperatures) and iteratively adjusted constant and scale until they lay in the same general region. (I probably could have also just inverted temp and thrown it onto a secondary axis, but this was easier.) As a result, the vertical units for the voltage lines are millivolts of deviation from their spec voltages, and for the temperature they’re completely synthetic units. Note that the chart is kind of mis-titled: for the reasons noted above (“I’m not good at stats”), this is not a direct plot of correlation, but simply a superposition of the two datasets (suitably massaged). We already know that the 12V line and the 5V line are pretty well aligned, so look for how well the green line tracks the pattern established by the voltages:

Temperature / Voltage Analysis - Past Year

Looking at this chart, as has been clearly foreshadowed, we have a strong “correlation” over the course of the year. My explanation for the basic pattern here is the system fans. There are both 5V and 12V fans in the system. Since the server is relatively underutilized (only accessible by me and my wife, and typically for light duty), the usage of these fans is the dominant change in power usage over time.
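(Incidentally, the iterative invert-and-scale massage can be done in one shot with ordinary least squares. This is a sketch with invented sample numbers rather than the real year of readings, and fit_inverted is my name for it, not a tool I actually used:)

```python
def fit_inverted(temps, volts):
    """Least-squares fit of volt ~ a + b * (-temp); returns (a, b).

    b is the scale and a the constant offset that I adjusted by hand;
    this is just closed-form simple linear regression on x = -temp.
    """
    xs = [-t for t in temps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(volts) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, volts))
    b = sxy / sxx
    return my - b * mx, b

# synthetic data: 12V deviation falls 10 mV per degree C above 30
temps = [25.0, 30.0, 35.0, 40.0]
volts = [0.05, 0.0, -0.05, -0.10]
a, b = fit_inverted(temps, volts)
print(round(a, 3), round(b, 3))  # 0.3 0.01
```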

There is clearly some overshoot in both directions, in the warmest days of summer and the coldest days of winter. I invoke two explanations for this. In summer, these are NOT independent measures – the fans are working to cool the motherboard. I would infer that temperatures were actually higher in August than July, but the fans kicked in to keep the motherboard temperature roughly constant across those two months.

This rationale doesn’t seem to work to explain the winter overshoots, since the fans only work to push the temperatures in one direction. There are two scenarios here. One is that we over-compensate with heating when it’s coldest outside; this generates a noisy graph on a yearly scale (since the temperatures are varying widely over the course of the day, and the fan load correspondingly). A second explanation would be that the fans are essentially idle through the winter, and as a result they cease to be the dominant driving force on particularly the 12V line. This allows the daily noise of varying utilization to show through, instead of being drowned in the all-day-long load of the fans.

Unfortunately, I lack the data to disambiguate these last two scenarios. Most of the fans are controlled by an off-board fan controller (sitting in a drive bay, but not providing instrumentation data back to the host computer). The BIOS-controlled fans should report their speed, but I’ve never been able to get good data out of them; I suspect that at heart the problem is that this motherboard came from a pre-packaged HP “media computer”, and they may have saved a few pennies by cutting out some of the monitoring tool support. At some point I’ll upgrade my dev box, and transplant its rather more enthusiast motherboard to the server. That’ll be a setup chore to be sure, but perhaps I’ll be able to get some direct information. In the meantime, having such a long baseline allows me to be pretty confident in my indirect conclusions. (Note that this is the whole point of this post: if I point-sampled this data at any time throughout the year, I really wouldn’t be able to draw any conclusions at all. However, a year’s worth of data tells a pretty compelling story, even without formal stats to back it up.)

Sampling Outside the Box: 1-Wire and Friends

I have some longer-range thoughts or daydreams, time and money permitting (primarily time). I spent quite a while looking for a fan controller which “would” report its sensor readings to the host. Apparently they exist, but the only examples I was able to find that really fit my needs were these beasts from Matrix Orbital. While extremely cool in their own way, I really don’t need an OLED screen displaying weather etc on my server box, when the whole point is that I don’t see or hear the server anyway. However, some of the tech specs on this monster mentioned that it can gather and report data from any “1-wire” sensor source. That set me off on a whole other trail.

It turns out that “1-wire” is a Dallas Semiconductor (now Maxim) physical and logical protocol for simple, low-power sensor networks. It looks to be primarily designed for industrial applications, but there is a hobbyist community as well (particularly for weather stations, it seems, although I found a fascinating story of a buried moisture sensor driving an automated sprinkler system – now that’s gardening! [Update: Okay, I think the hobby-built CO2 sensor beats the automated sprinklers; no link because I noticed belatedly that it’s deep in an “Advanced Marijuana Cultivation” forum – I’m still bemused by how Google ends up cross-cutting conversations!]). There are USB “host adapters” available, as well as a pretty wide range of sensor units (and raw adapters which simply sample a voltage and report it) – the net result seems to be that you can monitor pretty much anything you want, if you obey the topology and run length constraints and don’t mind writing some software to interrogate and record.
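(For the “writing some software” part, at least on Linux the kernel’s w1 subsystem makes it pretty painless: each temperature sensor shows up as a file like /sys/bus/w1/devices/28-*/w1_slave, and parsing its two-line report takes a few lines of Python. The sample data below is typical DS18B20 output, not from my hardware:)

```python
def parse_w1_slave(text):
    """Parse a DS18B20 w1_slave report into degrees Celsius.

    The kernel driver emits two lines; the first ends in YES or NO for
    the CRC check, the second carries the reading as t=<millidegrees>.
    """
    lines = text.strip().splitlines()
    if not lines[0].strip().endswith("YES"):
        raise ValueError("CRC check failed")
    _, _, milli = lines[1].rpartition("t=")
    return int(milli) / 1000.0

sample = ("4b 01 4b 46 7f ff 0c 10 d8 : crc=d8 YES\n"
          "4b 01 4b 46 7f ff 0c 10 d8 t=20687\n")
print(parse_w1_slave(sample))  # 20.687
```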

So my new dream is to expand my coverage significantly. This project started as a way to monitor the health of my server (and particularly to spot dead fans) without being able to see or hear it. However, there’s no reason I can’t use the same infrastructure to monitor our living space as a whole. I’d love to track ambient temperatures in various parts of the house, to see where the insulation is failing (and prove out my intuition that it really is “cold” in the dead space under my computer desk). Weather metrics are less near and dear to my heart, but it could be amusing to have a windspeed sensor or track rainfall. Moisture sensors in the basement could give an approximation of Home Heartbeat, without the vendor lock-in and high prices. I haven’t found a good AC sensor yet, but it seems like it would be possible to do some approximation of the “Smart Grid” idea (realtime electricity-usage tracking) without having to wait on Duke Light to figure it out. Even further down the road (and after building some trust in the system), it could be combined with an X-10 network for real closed-loop environmental control – far better than simply putting everything on a timer, or “home automation” which requires a user at a web browser to actually do anything. Before I dive into anything that big (and potentially annoying if done wrong), I’ll probably start with simpler closed-loop systems. For example, there’s no reason why my nightly cron tasks can’t wait for system idle, instead of running at midnight – that’s when I get some of my best work done! Some small-scale experiments with this sort of feedback loop would teach me a lot about how to control yoyoing and all the other pitfalls of feedback-based machine control, without annoying my wife.
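(The wait-for-idle cron idea is small enough to sketch right now: poll the load average and hold the task until the machine quiets down, with a give-up timeout so the job still runs on a busy day. The thresholds here are invented, and the injectable loadavg/sleep arguments are just there to make the sketch testable:)

```python
import os
import time

def run_when_idle(task, threshold=0.5, poll_s=300, max_wait_s=6 * 3600,
                  loadavg=os.getloadavg, sleep=time.sleep):
    """Run task() once the 5-minute load average dips below threshold.

    Gives up waiting after max_wait_s, so the job still happens on a
    busy day; loadavg and sleep can be swapped out for testing.
    """
    waited = 0
    while waited < max_wait_s and loadavg()[1] >= threshold:
        sleep(poll_s)
        waited += poll_s
    return task()
```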

December 5, 2009

B-Splines: Better Math

Filed under: 3D Graphics, Numerical methods — Seth Porter @ 9:13 pm

In the previous three parts of this series, we’d sorted out why B-splines are interesting curves, attempted to sort out B-spline terminology, and decided that it would be really nice to be able to evaluate them with uniform functions. I’d tried some quick and dirty approaches to force the issue, and had gotten little in response. It was time to actually do some algebra and figure out what was going on.

A Note About Tools

That means there are going to be a lot of LaTeX-created images in this section. Sorry, but it’s a lot less painful than trying to write out math in raw HTML. (Okay, writing this inspired me to search for LaTeX to MathML translators, and of course the W3C itself maintains quite a nice list. Oh well, I promise I’ll look into that for next time.) In the meantime, I’m using the wiki built into Trac to write this post, with the TracMath plugin to take inline LaTeX blocks and render them as PNGs. This means that I just type

  $$N_{i,k}(v) = {{\left(T_{k+i} - v\right)\,N_{i+1,k-1}(v)}\over{T_{k+i}-T_{i+1}}} +
  {{\left(v-T_{i}\right)\,N_{i,k-1}(v)}\over{T_{k+i-1}-T_{i} }}$$

and I get

Sample LaTeX equation

Well, okay, I don’t write that, because I can’t remember LaTeX math syntax to save my life. (I got pretty good at it in college, but it’s been a while.) Anyway, it’s hard to parse by eye, and I’m very prone to swap a u for a v, which can be unfortunate when you’ve just carefully drawn a distinction between them.

Instead, I’m using Maxima, a free Computer Algebra System. It’s stranger to pick up than Mathematica, but the price is right (and it’s quite powerful once you get used to its ways). It seems to be really flaky on recent Ubuntu releases, which is unfortunate, but I’m primarily using 5.17.0 for Windows and it works like a champ. It has a very nice tex function, which will spit out the TeX formatting for any given expression.

So in practice, I work the math in Maxima, have it convert to TeX, then paste that into the post. (Sometimes I tweak it slightly, for example to pull out a leading “1/2” rather than leaving the whole expression over 2 as Maxima seems to prefer. But for the most part, the expressions in the text are directly out of Maxima.) I have some ideas of writing a Trac plugin to close the loop and directly evaluate and typeset Maxima expressions in the text, but I haven’t got it all worked out yet: the equations in a piece generally build on each other, so you need enough lifecycle awareness to start and maintain a separate Maxima state for each page in flight. Also you probably need some non-displayed helper definitions and intermediate values, so you need a separate bit of syntax to indicate what should be rendered into the page. I think the fact of the existence of the PageOutlineMacro means it’s possible, but the fact that the PageOutlineMacro actually re-parses the whole page means it’s not necessarily easy. (I’d also still have to solve creating and maintaining enough state that the various individual blocks could fetch their expression / formatted results from the initial gather-and-run macro.) Anyway, for the moment I’ll do it the hard way.

Back to the Math

I’m going to run through a concrete k=3 (quadratic) example with 6 control points (so n=5), and claim that this demonstrates a general solution for k=3. I’m pretty sure it does, but it’s a lot easier to keep track of the indexes with a concrete number. Or so I claim.

So, the knot vector T should be the sequence [0, 0, 0, 1, 2, 3, 4, 4, 4]. Recall that indices are zero-based, and that Ni,1 is defined as

N_(i,1)(v) = 1 if T_i <= v < T_(i+1), and 0 otherwise

Given this, the Ni,1 terms will start being interesting at i=2, where N2,1 = 1 for 0 <= v < 1. From then on, each Ni,1 is non-zero (equal to 1) for i-2 <= v < i-1, up through the last value we need at N5,1. Using Maxima to evaluate the raw B-spline equation

for our knot vector T (again, this is the k=3 case), we get a whomping huge expression in terms of the control points P, the Ni,1 “switches”, and of course v. I won’t reproduce the whole thing here (fully expanded, there are 33 terms, and even my new wider columns can’t handle that), but here’s the coefficient of the second control point P1 for flavor:

Explicit Equations for Each Segment

The good thing is that we don’t ever really need the whole equation at once. Only one of the Ni,1 terms will be 1 at any given time — this is what produces the curve segments. By setting N2,1 through N5,1 to 1 in turn (and the other Ni,1 terms to 0) we can produce the four curve segments. In turn, these are

Explicit B-spline segments

Maxima Script

For anyone who’s following along at home, here’s the Maxima to get our results so far:

N(N1, T, i, k, v) := if k = 1 then N1[i] else
(if T[i+k-1] = T[i] then 0 else (v - T[i]) * N(N1, T, i, k-1, v) / (T[i+k-1] - T[i])) +
(if T[i+k] = T[i+1] then 0 else (T[i+k] - v) * N(N1, T, i+1, k-1, v) / (T[i+k] - T[i+1]));

BSpline(P, N1, T, n, k, v) := sum(P[i] * N(N1, T, i, k, v), i, 0, n);
T5Vals : [0, 0, 0, 1, 2, 3, 4, 4, 4];
T5[i] := T5Vals[i+1];

BSpline5_3(v) = BSpline(P, N1, T5, 5, 3, v);

makeN1Helper(curveSeg, n, k) := makelist(if j - k + 3 = curveSeg then 1 else 0, j, 0, n-1);
BSpline5_3_Seg(seg, P, u) := BSpline(P, makeN1Helper(seg, 5, 3), T5, 5, 3, u);
for seg : 1 step 1 thru 4 do disp(
  segment[seg] = facsum(ratsimp(
    BSpline5_3_Seg(seg, P, v)), P[seg-1], P[seg], P[seg+1]));

Note that Maxima lists are one-based, hence the helper function T5 to preserve the zero-based indexing in the original definitions. Since Ni,0 is never referenced, I can get away with a list starting at index 1 (though I admit this is made more confusing by the choice of a zero-based loop variable. Oops.) Also note that BSpline5_3 wasn’t actually defined as a function there — that was just an equality, to make Maxima print it out nicely. Other than that, this is pretty much what it looks like. Oh, and notice that the recursive N definition has to avoid a couple of divide by zero cases to implement the “where 0/0 is defined to be 0” escape hatch.
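(For anyone allergic to Maxima, the same recursion is easy to transcribe into plain Python for numeric evaluation. This is a sketch, not the symbolic derivation; basis and bspline are my names, and note that the half-open test at k=1 means the very last parameter value v=4 falls just outside the curve:)

```python
def basis(T, i, k, v):
    """Cox-de Boor recursion N_(i,k)(v), zero-based, with 0/0 taken as 0."""
    if k == 1:
        return 1.0 if T[i] <= v < T[i + 1] else 0.0
    left = right = 0.0
    if T[i + k - 1] != T[i]:
        left = (v - T[i]) * basis(T, i, k - 1, v) / (T[i + k - 1] - T[i])
    if T[i + k] != T[i + 1]:
        right = (T[i + k] - v) * basis(T, i + 1, k - 1, v) / (T[i + k] - T[i + 1])
    return left + right

def bspline(P, T, k, v):
    """Evaluate the B-spline with control points P and knot vector T at v."""
    return sum(P[i] * basis(T, i, k, v) for i in range(len(P)))

# the k=3 example from the text: 6 control points, T = [0,0,0,1,2,3,4,4,4]
T5 = [0, 0, 0, 1, 2, 3, 4, 4, 4]
P = [0.0, 1.0, 3.0, 2.0, 5.0, 4.0]
print(bspline(P, T5, 3, 0.0))  # 0.0 (the curve starts at P0)
```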


Okay, those equations work but they look pretty ugly (and they’re different in each segment, but we’ll work on that in a minute). It took me a while to sort this out, but I sort of tipped my hand back in the “Definitions” post where I introduce w and j, so I’ll just repeat the equation from there: v = u (n-k+2) = j+w-1. Remember that j is the segment number ranging from 1 to n-k+2, which is exactly the subscript of segment in the explicit equations above. So we substitute w-1 in the first equation, 1+w-1 in the second and so forth. Using the Maxima definitions above, this is just evaluating the BSpline5_3_Seg function above at seg+w-1 rather than v:

for seg : 1 step 1 thru 4 do disp(
  segment[seg] = facsum(ratsimp(
    BSpline5_3_Seg(seg, P, seg+w-1)), P[seg-1], P[seg], P[seg+1]));

(the facsum bit just rearranges the terms) yielding

Reparameterized B-spline segments

Now we’re getting somewhere. The middle two equations are identical except for shifted subscripts on P. The first two and last two are different, but we can work that out in a minute. First let’s generalize the uniform version and check our claim that

Uniform B-spline for k=3

works for those middle terms. This will also give us a peek at what’s going on with the end segments.

B3[uniform](P, j, w) := (-P[j]*(2*w^2-2*w-1)+P[j+1]*w^2+P[j-1]*(w-1)^2)/2;
for seg : 1 step 1 thru 4 do disp(
  difference[seg] = ratsimp(
    BSpline5_3_Seg(seg, P, seg+w-1) - B3[uniform](P, seg, w)));


B-spline segment differences from uniform

What we’re computing here is the difference between the “real” equation for each of the four segments and what we get from using the uniform equation for all of the segments (ignoring the segment number).

Doesn’t look immediately promising, but there are two good things:

  • the two middle segments are as predicted by the uniform function above
  • the two end segments’ differences only involve two control points

We know that P1 and P4 are used in the middle two segments as well as the end segments, so we can’t do anything to them. However, the two end points (P0 and P5) are only used in the end segments. So what if we feed some different value (not the current P0, but some combination of P0 and P1) into the uniform equation? Can we make the result be the same as evaluating the “real” B-spline function on the original control points?

In the uniform version of segment 1, we replace P0 with a new variable x, then set it equal to the real first segment of the k=3 B-spline:

Segment 1 with p0 as free variable

We then solve for x hoping we can eliminate all the w terms:

rightAnswer : ratsimp(BSpline5_3_Seg(1, P, 1+w-1));
uniformWithFreeP0 : ratsubst(x, P[0], B3[uniform](P, 1, w));
solve(uniformWithFreeP0 = rightAnswer, x);

and we get (drumroll, please)

x = 2 P0 - P1

(Okay, that didn’t really need a LaTeX block, but it really did take me quite a while to figure out, and I was inexplicably happy when I found it (literally: one very real reason for this whole series is to explain to my wife why I was grinning so much that week). If I’d happened to have run into that formula up front I would have spared myself a great deal of frustrating math trying to figure out how to make sense of what I was seeing. Of course, I also wouldn’t have learned a lot about splines, so maybe it was better this way.)

Repeating the process for the last control point (in this case P5), we get x = 2 P5-P4:

rightAnswer2 : ratsimp(BSpline5_3_Seg(4, P, 4+w-1));
uniform2WithFreeP0 : ratsubst(x, P[5], B3[uniform](P, 4, w));
solve(uniform2WithFreeP0 = rightAnswer2, x);

This is a reassuring result — the equations are all symmetric, so it should be possible to reverse the control points and produce the same curve. Therefore, it’s nice to see that this formulation for the end-points is also symmetric.

So the practical upshot of this is that we now have a way to evaluate all the segments of a B-spline, even the “weird” end segments, with the same uniform equation. To do this, we have to massage the control points a little. Given the input sequence

( P0 , P1 , … , Pn-1 , Pn )

we need to make a copy, so we can feed the uniform function the sequence

( 2 P0 – P1 , P1 , … , Pn-1 , 2 Pn – Pn-1 )

instead. That doesn’t seem like much of a price to pay for freedom (or at least for being able to evaluate all segments of all k=3 B-splines curves with the same straight-line evaluation code).
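(As a final sanity check, here’s a quick numeric verification in Python, with made-up control points, that the preconditioned sequence fed through the uniform k=3 function reproduces the real curve on all four segments; basis and bspline are just the Cox-de Boor recursion transcribed from the Maxima above:)

```python
def basis(T, i, k, v):
    # Cox-de Boor recursion, zero-based, with 0/0 taken as 0
    if k == 1:
        return 1.0 if T[i] <= v < T[i + 1] else 0.0
    left = right = 0.0
    if T[i + k - 1] != T[i]:
        left = (v - T[i]) * basis(T, i, k - 1, v) / (T[i + k - 1] - T[i])
    if T[i + k] != T[i + 1]:
        right = (T[i + k] - v) * basis(T, i + 1, k - 1, v) / (T[i + k] - T[i + 1])
    return left + right

def bspline(P, T, k, v):
    return sum(P[i] * basis(T, i, k, v) for i in range(len(P)))

def b3_uniform(P, j, w):
    # the uniform k=3 segment function from the text
    return (-P[j] * (2*w*w - 2*w - 1) + P[j + 1] * w*w + P[j - 1] * (w - 1)**2) / 2

T5 = [0, 0, 0, 1, 2, 3, 4, 4, 4]
P = [0.0, 1.0, 3.0, 2.0, 5.0, 4.0]   # arbitrary sample control points
# precondition the end points: (2 P0 - P1, P1, ..., P4, 2 P5 - P4)
Q = [2 * P[0] - P[1]] + P[1:-1] + [2 * P[-1] - P[-2]]

for j in range(1, 5):                # segments 1..4
    for w in (0.0, 0.25, 0.5, 0.75):
        v = j + w - 1                # v = j + w - 1, as derived earlier
        assert abs(bspline(P, T5, 3, v) - b3_uniform(Q, j, w)) < 1e-9
print("uniform evaluation matches on all four segments")
```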

The same general approach works for k=4 curves (cubic B-splines). Because the cubic blending functions involve more points in each segment, we have to replace both P0 and P1 with free variables, and solve for them simultaneously (using the known equations for the first two segments, instead of just the first segment), but it’s the same logic and it works quite nicely. I haven’t tried to generalize a formula for any value of k, mostly because quadratic and cubic B-splines are far more common than other types.

This took much longer to post than I’d planned, but I hope it was worth the wait for someone. I can absolutely guarantee that this isn’t an original result, but I haven’t actually seen it in Wikipedia or MathWorld or the usual suspects, so I feel entitled to be slightly smug.

November 2, 2009

The Stylesheet Blues

Filed under: Uncategorized — Seth Porter @ 9:01 am

Previous visitors might notice a new look. The fixed width column just wasn’t working for me, and the math looked really hideous when it was scaled into 400 pixels.

So I’m trying out a new “theme”, as a quick-and-dirty way to avoid having to actually, you know, design a visual look for this site. Which would be nice, but unfortunately said theme apparently doesn’t handle definition lists (<dl>, <dd> and <dt>), which are my recent favorite way to avoid dealing with tables. (I stole this affection from a blog post I read sometime, but since it no longer appears in the top page of Google results for “definition list”, I’m deeming it an orphan work and mine for the appropriation.)

Anyway, it looks like there is some CSS work in my future, probably mining this site pretty heavily. (I continue to be delighted to live in a world where you can search on any given HTML tag and be pretty much guaranteed to find a nicely organized page of CSS techniques, rants against rampant misuse (generally a good source for ideas you hadn’t yet thought of), and generally a rich literature for those of us in the “I care about CSS for exactly as long as it takes to get my font sizes consistent” camp.)

Given my frequency of updates, you can therefore look forward to many months of inappropriately-sized definitions and other offenses against visual design.

(I actually do have another article in the B-spline Series, wherein I get to look all clever (had I not previously tipped my hand about my various confusions) by presenting a rather nice uniform version of k=3 and k=4 B-splines based on preconditioning the control points. There will be more LaTeX math! More empirically-derived constants which I pass off as proven! It’ll be great.)

November 1, 2009

An Aside About Uniformity

Filed under: 3D Graphics — Seth Porter @ 4:51 pm

Before getting into a correct treatment of the problem, and deriving a correct way to handle these differing end segments (as part of the B-spline series of posts), I should explain what I mean by “uniform” and why I’m so hung up on it. (That is, other than the fact that the Buniform function is an easily-understood mixing of three control points, while the recursive definition of the basis functions makes my head hurt.) In particular, why not just accept a solution along the lines of:

function evaluateBSplineWithConditional(u) =
v = u * (n-k+2)
segment = floor(v)
w = fractional_part(v)
if(segment <= 1) then
  do weird thing(segment, w)
else
  do uniform thing(segment, w)

(I may have got the indexing slightly wrong, but I think this works (and is a good illustration of why I was being so pedantic about u, v, and w above — it’s not complex math, but it’s easy to mess up the parameterization).) Also note that I’m only looking at the weirdness at the beginning of the curve; properly the conditional would need to check whether the segment was at either end of the curve. This is a simplified version to support the discussion, not an actual implementation.

Endpoints are often exceptional cases; why not just define the evaluation with that exceptional case and move on? All the more so since my attempted trick of “pre-conditioning the data” (replacing the original sequence of control points with a sequence with repeats at the beginning and end) didn’t work? (An important background assumption to the rest of this discussion is that this function is going to be run many many times, since we need a lot of points on that curve to make it look like a smooth curve when we draw it on-screen, or when we send the points to a CNC router to cut metal, or whatever we’re doing with it. If we only needed to do it once, this would probably be the right solution.)

The Problem with Branching

The answer basically has to do with computer architecture and the way it has evolved over the past couple of decades. Once upon a time, when I was programming on my father’s hand-me-down Heathkit Z-80, this would have been a perfect solution — each time we needed to evaluate a B-spline, we’d simply decide if we were in the normal or weird section of the curve, and calculate accordingly. The problem is that as we poured billions of dollars into CPU fabrication techniques, we got more and more transistors, and needed some way to take advantage of them in order to make things run faster. One major solution is “pipelining”, which essentially can be described as doing many things at once. Rather than waiting for each step to complete before we start on the next step, the CPU will get a lot of things started at once, and pull them all together when needed. (Perhaps I’ve been watching too much Iron Chef, but this seems like partitioning work out to a sous chef.) In the extreme case of the P4 architecture, the pipeline was up to 20 stages long. A much more technical discussion can be found at http://arstechnica.com/old/content/2001/05/p4andg4e.ars but the basic takeaway is that for a modern CPU to run fast, it needs to be able to do many things at once.


The B-spline Series: Definitions (Abridged)

Filed under: 3D Graphics — Seth Porter @ 4:24 pm

This is the second part of a so-far three part series. See part one for some context.

For a typical background you can look at MathWorld or Wikipedia. These are pretty representative of the descriptions you’ll find on the internet in casual searching: technically correct, with nicely formatted equations, but not a whole lot of actual help if you’re trying to understand the things. In particular, the recursive definition of the basis functions is very clever but incredibly unintuitive as far as what it means. (I also notice that the links share my own indecisiveness about how to title-case “B-spline”.)

A much more useful writeup can be found in the “B-spline Curves” section in these excellent class notes. This is a nice treatment with lots of pretty pictures, although I think the author makes things somewhat more complex by staying with the raw 0 to 1 parameterization throughout (so the elements of the knot vector end up as fractions rather than integers). Nothing wrong with it, just a different treatment (and I suspect one motivated by uniform representation across many kinds of curves, rather than my focus on uniform representation for all segments of the B-spline to ease high-volume computation).

(I should say at this point that I’m torn about how to present things. It’s very tempting to start with what I know now, tell a nice clean story, and maybe then offer an amusing anecdote about how even I didn’t quite get it at first… but that’s exactly what the textbook I was working from did (well, except for the amusing anecdote), and it didn’t serve me very well. So instead I’ll let the game tell the story, and just lay it out as it happened to me.)


The B-spline Series: Motivation

Filed under: 3D Graphics — Seth Porter @ 3:51 pm

The B-spline family of curves is a nice way to make smooth curves from a series of control points. The idea is that the control points give you “handles” to pull the curve around; it won’t actually pass through any of them (except for the first and last ones), but it will be attracted to them while retaining some nice smoothness properties.
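As a concrete sketch of that “attracted but not through” behavior (my own illustration, not code from this series), here’s one segment of a uniform cubic B-spline. Note that a single uniform segment starts at (P0 + 4·P1 + P2)/6 rather than at a control point; the pass-through-the-endpoints behavior comes from clamping the ends of the full curve:

```python
def cubic_bspline_segment(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].

    The four weights always sum to 1, so every point on the segment is a
    weighted average of the control points -- pulled toward them, never
    required to pass through them.
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

For control points (0,0), (0,1), (1,1), (1,0), the segment runs from (1/6, 5/6) to (5/6, 5/6): close to the two interior “handles”, but not on them.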

That much I knew coming in. For various reasons, I wanted to write a program to evaluate B-splines, with a particular eye to rendering them in 3D. This seemed like it was going to be a very easy task; instead, I learned a lot along the way.


June 27, 2009

The Ugly Truth Behind Pretty Pictures

Filed under: Uncategorized — Seth Porter @ 12:06 am

Or maybe not that ugly, but it makes a better headline.

Last post I promised the backstory behind the pretty pictures. As a fair warning, there are no pictures this time around, just lots of discussion of Linux monitoring tools and their integration. (more…)

June 20, 2009

Server Monitoring with Pretty Pictures

Filed under: Uncategorized — Seth Porter @ 1:40 pm

(Here are the pretty pictures. The actual discussion is after the break.) (And okay, maybe they aren’t that pretty, but I’d say they’re more aesthetic than you could reasonably expect for monitoring health stats on a Linux server. Further defensive parentheticals will be reserved for later in the post.)

For a while now I’ve been meaning to post something about Calliope, my (Debian) Linux server at home. I’ve got a nice draft somewhere about all the useful services I’ve got running, why I chose the packages I did, and so forth. At the end of the day, though, the basic take-away is “pick some services you want to run, install them, and Google till you get them to work”. (I’ll probably post it someday, though perhaps not after giving it such a heart-warming buildup.)

Anyway, the more I think about it, the more I’m convinced that the most distinctive part of Calliope isn’t the actual services she provides, but rather the tools I’ve assembled for keeping an eye on her. This isn’t like a desktop box, where I’m sitting at the console or the machine is turned off. Instead, most of my interaction is indirect — listening to music on the PS3, or looking at files shared through Windows networking or Apache, or even just acquiring a network address through DHCP. The common thread in all these cases is that I’ll notice abrupt failure, but I’m not directly logged in to see notifications or status messages.

One traditional solution has always been e-mail. I send myself nightly notifications of backup success, mostly because it gives me a warm fuzzy feeling to know that my Subversion repository is safely in at least two places (and on a recent trip, I found it a surprisingly reassuring touch of home), but fundamentally I’m not prepared to page through long logs or status reports on my phone (where I read most of my e-mail). In fact, I’m not prepared to do that even on my desktop or netbook, unless I already know there’s a problem (and remember, the only readily visible symptoms of a problem are “slow” or “failed to connect”).

So to summarize, I’ve got an uninvolved admin staff (myself), who wants things to Just Work, and doesn’t want to have to explain to his wife that the internet is down right now because he was in the middle of a project when he got bored with it. Fortunately, I’ve got Debian-stable as a pretty damn rock-solid baseline to build from, so most problems will be the result of misconfiguration, user error, or mechanical failure. (The last is also a challenge, since the box is out in the hallway with the cats’ litter box — not a lot of foot traffic to notice things like fan failure or hard drive Squeaks of Doom.) Oh, and because I don’t do Linux server admin for a day job, I’m not necessarily going to be able to distinguish between dire-sounding-but-routine conditions and actual symptoms, since I lack a good intuitive baseline.

Anyway, enough exploration of the problem space. (Well, never enough, but at some point requirements analysis turns into plain old griping.) I actually have a solution that’s working pretty well for me so far, and is probably the biggest difference between this box and previous Windows servers I’ve set up.


December 7, 2008

Vista Sleep Problems

Filed under: Uncategorized — Seth Porter @ 6:36 pm

I don’t think this really counts as a blog without a post about Vista (or XP) sleep and hibernate problems. So, here goes. (more…)

December 3, 2008

Shadow Matrices for Geometry

Filed under: 3D Graphics — Seth Porter @ 12:17 am

It seems like I’m about to head into the dark territory of F# “quotations”, which apparently let you play compiler without all that tedious parsing. (Sorry for the absurdly shallow interpretation, but I haven’t played with them much yet.) Anyway, before I get into that, I thought I’d write up my progress on getting nicely rendered space curves.

Segmented Cylinders

Last post, I’d gotten a unit cylinder to render halfway decently, after some compromises (and throwing more triangles at the thing). After that, I had to translate and rotate the cylinders into position. That should have been a trivial step (“I have a unit vector along the x axis, please map it over here”), but turned out to be more trouble than it should have been. (I won’t tell the full story, but here’s a quick runthrough.) The Matrix.CreateWorld method looked very promising indeed, but kept squashing out any z component (for reasons which became clear later; at the time I was passing the z unit vector as my “up vector”).

After some hunting, and reading far more than I ever wanted to about quaternions, I fell back to a simpler approach: I want to overlay the 0–1 line on the x axis onto some line p0 to p1. Clearly, I need to translate by p0 and scale by |p1-p0| (that’s supposed to be the length of the line, if my pseudo-typography isn’t coming through). In almost all cases, I also need to rotate; from basic vector math, the axis of rotation will be the cross product i × (p1-p0) (I can’t get the little hat over the i, but that’s the unit vector in the x direction). The amount of the rotation will be the angle between the vectors: acos(i • (p1-p0) / |p1-p0|); note that the dot product has to be taken with the normalized direction, or the “angle” comes out wrong for anything but unit-length segments. (I’m particularly fond of UK-based sites, since they tend to refer to “maths”. Juvenile, I know.)

This all works fine except for the case when p0 and p1 actually lie on the x axis already; in that case, the cross product goes to all zeros, which was what was flattening out my matrices from CreateWorld (at least I think so; I never went back to test the theory). So I ended up with a scale, a conditional rotation, and then a translation. I was sad about the conditional — even though modern shaders can handle conditionals, it still feels like a mark of shame. There’s probably some way to be clever about this; after all, any axis of rotation will do just fine if you’re rotating zero degrees. But I was in a hurry to get something rendering again, so I gave up for the moment.
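Here’s roughly what that scale / conditional-rotate / translate pipeline looks like as a standalone sketch — my Python reconstruction using Rodrigues’ rotation formula, not the actual XNA/F# code:

```python
import math

def cross(a, b):
    """Standard 3D cross product."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def segment_transform(p0, p1):
    """Return a function mapping the unit segment on the x axis onto p0 -> p1:
    scale by |p1 - p0|, conditionally rotate, then translate by p0."""
    d = [b - a for a, b in zip(p0, p1)]
    length = math.sqrt(sum(c * c for c in d))
    dn = [c / length for c in d]            # normalized direction
    axis = cross([1.0, 0.0, 0.0], dn)       # i-hat x direction
    sin_t = math.sqrt(sum(c * c for c in axis))  # |i x dn| = sin(angle)
    cos_t = dn[0]                                # i . dn  = cos(angle)

    def apply(pt):
        v = [length * c for c in pt]        # scale
        if sin_t > 1e-9:                    # the conditional rotation
            a = [c / sin_t for c in axis]   # unit rotation axis
            av = cross(a, v)
            adotv = sum(ai * vi for ai, vi in zip(a, v))
            # Rodrigues: v cos + (a x v) sin + a (a.v)(1 - cos)
            v = [v[i] * cos_t + av[i] * sin_t + a[i] * adotv * (1 - cos_t)
                 for i in range(3)]
        elif cos_t < 0:                     # anti-parallel to x: flip
            v = [-v[0], -v[1], v[2]]
        return [vi + oi for vi, oi in zip(v, p0)]  # translate
    return apply
```

The degenerate branch is exactly the conditional I was grumbling about: when the segment already lies on the x axis, the cross product vanishes and there’s no axis to normalize, so we skip the rotation (or flip, if the segment points the other way).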

Anyway, this was enough to get me from oddly-lit unit cylinders all the way up to this:

Sandworm: Curve as discrete cylinder segments

In this case, I’m rendering the first Hermite basis function, since I had code lying around to calculate it. With 20 samples, the curve looks quite smooth (all things considered), but it’s discontinuous, and gets into worse trouble near the ends. (In areas of high curvature, the abruptly changing normals at lines of overlap look distinctly articulated, rather than like a smooth curve.) So the next step was to get at least C0 continuity, and try to do something about the normals.

