eworldproblems
  • Home
  • About
  • Awesome Ideas That Somebody Else Already Thought Of
  • Perl defects
  • Books & Resources

Posts in category lang

Introducing Prophusion: Test complex applications in any version of PHP



Putting together testing infrastructure for Curator has been an interesting project unto itself. My wishlist included:

  • Support for a wide range of PHP interpreter versions, at least 5.4 – current
  • In addition to the unit test suite, be able to run full integration testing including reads/writes to actual FTP servers.
  • Keep the test environment easy enough to replicate that it is feasible for developers to run all tests locally, before submitting a pull request.

By building on Docker and phpenv, I was able to meet these requirements, and create something with more general applicability. I call it Prophusion, because it provides ready access to over 140 PHP releases.

For a quick introduction to Prophusion, including a YouTube video of it in action, check out this slide deck.

I’ve since fully integrated Prophusion into the testing pipeline for Curator, where it happily performs my unit and in-depth system tests in the cloud, but I also make a habit of running it on my development laptop and workstation at home as I develop. You can even run xdebug from within the Prophusion container to debug surprise test failures from an xdebug client on your docker host system. I’m currently doing that by setting the right environment variables when docker starts up in my curator-specific test environment; I’ll port those back to the prophusion-base entrypoint in the next release of the prophusion-base docker image.

Prophusion includes a base image for testing in the CLI (or FPM), and one with Apache integrated for your in-browser testing.

Posted in devops, PHP

Automatic security updates and a support lifecycle: the missing links to Backdrop affordability



I saw Nate Haug and Jen Lampton give their Backdrop CMS intro talk last weekend at the Twin Cities Drupal Camp. They have already done much to identify and remedy some big functionality and UX issues that cause organizations to leave Drupal for WordPress or other platforms, but you can read more about that elsewhere or experience it yourself on Pantheon.

Their stated target audience is organizations who need more from their website than WordPress’s primary use case provides, but still don’t need all the capability that Drupal 8’s software engineering may theoretically enable — not everyone needs a CMS that supports BigPipe or compatibility with multiple heterogeneous data backends.

That positions them squarely in the same target market as Squarespace — but whereas Squarespace is a for-profit business, Backdrop is open-source software, and affordability to the site owner is so important to the project that it gets a mention in Backdrop’s single-sentence mission statement.

Simplified site building for less-than-enterprise organizations is already a crowded space. The Backdrop philosophy identifies a small area of this market where a better solution is conceivable, but I think a big reason so much emphasis is given to goals and philosophy on the Backdrop website and in their conference session is that Jen and Nate recognize their late entry into this crowded market leaves Backdrop little room to miss in achieving its goals. Backdrop’s decision makers will need to keep a clear view of the principles the project was founded on in order for Backdrop to positively differentiate itself.

Affordability is one potential differentiator, and I am personally happy to see it’s one already embodied by Backdrop’s promises of backwards compatibility and low server requirements. But, frankly, WordPress has those things already. An objective evaluation of Backdrop’s affordability in its current form would put it on par with, but not appreciably better than, a more established competitor. But there is hope, because:

Current CMSs don’t give site owners what’s best for them

To make converts, Backdrop could truly differentiate itself with a pledge to offer two new things:

  1. Security backports for previous releases, with a clearly published end-of-support date on each release, so site owners can address security issues with confidence that nothing else about their site will change or stop working.
  2. A reference implementation for fully automated updates, so that installed sites stay secure with zero effort.

Here’s why.

Let’s assume there’s a certain total effort required to create and maintain that highly customized website described by Backdrop’s mission statement. Who exerts that effort is more or less transferable between the site builder and the CMS contributors. On one extreme, the site owner could eschew a CMS entirely and take on all the effort themselves. Nobody does this, because of the colossal duplication of exertions that would result from all the sites re-solving many of the same problems. Whenever a CMS takes over even a small task from site builders, a mind-boggling savings in total human hours results.

Consider this fact in the context of CMS security. From a site owner’s perspective, here’s the simplest-case current model in Drupal, Backdrop, WordPress, and other popular open-source CMSes for keeping the sites they run secure over time. This model assumes a rather optimistic site owner who isn’t worried enough about the risk of code changes breaking their site to maintain a parallel development environment; organizations choosing to do this incur even more steps and costs.

  1. A responsible individual keeps an eye out for security updates to be released. There are myriad ways to be notified of them, but somebody has to be there to receive the notifications.
  2. When the individual and/or organization is ready, an action is performed to apply the latest version of the code, containing the security update as well as non-security changes the developers have made since this site was last updated. Sometimes this action is as simple as clicking a button, but it cannot be responsibly automated beyond this point due to the need to promptly perform step 3.
  3. The site owner browses around on their site, checking key pages and functions, to find any unintended consequences of the update before their customers do. (For non-enterprise, smaller sites, let’s assume an automated testing suite providing coverage of the highly customized features does not exist. Alternatively, if it did exist, there would be a cost to creating it that needs to be duplicated for each instance of the CMS in use.)
  4. In practice, the developers usually did their job correctly, and the update does not have unintended consequences. But sometimes, something unexpected happens, and a cost is incurred to return the site to its past level of operation, assuming funds are available.

Once a site has been developed, unsolicited non-security changes to the code rarely add business value to the organization operating the site. In the current model, however, changes are forced on organizations anyway as a necessary component of maintaining a secure site, merely because they are packaged along with the security update. In my opinion, the observation that opens this paragraph ought to be recognized as one of the principles guiding Backdrop’s philosophy. In the classic model, the CMS avoids the small task of backporting the security fix to past releases, and the work is transferred to site owners in the form of the above steps. That’s an expense for the site owner, and in total it is multiplied by every site the CMS runs — a much larger figure than offering a backport would have amounted to.

This is a clear shortcoming of current offerings, and Backdrop’s focus on affordability makes it a ripe candidate for breaking this mold. Not to mention the value proposition it would create for organizations evaluating their CMS options. Heck, make a badge and stick it on backdropcms.org/releases:

[Badge mockup: “Supported 3 years” / “Stable”]

Backdrop could guarantee security updates will not disrupt the sites they run; the competition could only say “A security update is available. You should update immediately. It is not advisable to update your production site without testing the update first, because we introduced lots of other moving parts and can’t make any guarantees. Good luck.”

That’s something I think developers and non-technical decision makers alike can appreciate, and that would make Backdrop look more attractive. Don’t want to pay monthly dues to Squarespace? Don’t want to pay periodic, possibly unpredictable support fees to a developer? Now you don’t have to, on Backdrop.

The above case that a software support lifecycle would make site maintenance more affordable to site owners does not even begin to take into consideration the reality that many sites simply are not updated in a timely fashion because the updates aren’t automated. If you are not an enterprise with in-house IT staff, and you are not paying monthly premiums to an outfit like Squarespace or a high-end web host with custom application-layer firewalls, history shows a bot is pretty much guaranteed to own that site well before you get around to fixing it. Exploited sites are in turn used to spread malware to their visitors, so adding automated updates that can be safely applied, rapidly and without intervention, would have a far-reaching impact on Internet security.

But how achievable is this?

Isn’t extended support boring to developers volunteering their time?

Yes, probably. Top contributors might not be too psyched to do backports. But just as in a for-profit development firm, developers with a range of abilities are all involved in creating open-source software. Have top contributors or members of the security team write the fix and corresponding tests against the latest release, and let others merge them back. The number of patches written for Drupal core which have never been merged has, I think, demonstrated that developer hours are eminently available, even when the chance of those hours having any impact is low. Propose to someone that their efforts will be reflected on hundreds or thousands of sites across the Internet in a matter of days, and you’ll get some volunteers. Novice contributors show up weekly in #drupal-contribute happy to learn how to reroll patches as it is. Security issues might be slightly trickier in that care needs to be taken to limit their exposure to individuals whose trust has been earned, but this is totally doable. Given the frequency of core security releases in Drupal 7, a smaller pool of individuals known personally by more established community members could be maintained on a simple invite-only basis.

Update, April 2017
I discussed how achievable backports within major versions of Drupal would be in a core conversation at DrupalCon Baltimore. The focus was especially on the possibility of extending the model used to support Drupal 6 LTS, in which official vendors have access to confidential security information. Participants included many members of the security team and a few release managers; the YouTube video is here.

Some interesting possibilities exist around automating attempts to auto-merge fixes through the past releases and invoke the test suite, but rigging up this infrastructure wouldn’t even be an immediate necessity.

Also, other projects in the wider FOSS world, and even the PHP FOSS world, show us it can be done. Ubuntu made software lifecycles with overlapping supported versions famous for an entire Linux distribution (though almost all of the other distros also pull it off with less fanfare, chalking it up as an implicit necessity), and the practice has even been embraced by Symfony, the PHP framework deeply integrated into Drupal 8. While Drupal adopted Symfony’s software design patterns and frequently cites this as one of Drupal 8’s strengths, it didn’t adopt Symfony’s software lifecycle practices. In this regard, Drupal is increasingly finding itself “on the island.” Hey, if Backdrop started doing it, maybe Drupal would give in eventually too.

What about contributed modules?

I would argue that a primary strategy for handling the issue of contrib code should be to reduce the amount of contrib code. This fits well with the Backdrop initiative to put the things most people want in the core distribution of the product. A significant number of sites — importantly, the simplest ones that are probably least inclined to think about ongoing security — would receive complete update coverage were backports provided only for core. The fact that contrib exists in CMS ecosystems is sometimes cited as a reason security support lifecycles would not be possible, but it’s no excuse not to tackle them in the CMS’s core.

Contrib will always be a major part of any CMS’s ecosystem, though, and shouldn’t be left out of the opportunity to participate in support lifecycles. I would propose that infrastructure be provided for contrib developers to clearly publish their own support intentions on their project pages. Then, when a security issue is disclosed in a contrib module, the developer would identify, by means of simple checkboxes, the versions that are affected. There would be no obligation to actually produce a patched version of old releases identified as vulnerable, regardless of previously published intentions. This would have two effects: A) the developer would be reminded that they committed to do something, and therefore might be more likely to do it, and B) sufficient data would be available to inform site owners of a security issue requiring their attention if the contrib developer chose not to provide a backported fix. Eventually, the data might also be used as a statistic on project pages to aid site builders in selecting modules with a good support track record.

Aren’t automated updates inherently insecure?

No, although some web developers may conflate performing an automatic update with the risks of allowing code to modify other code when it can be invoked by untrusted users. A reference implementation of an automatic updater would be a separate component from the CMS, capable of running with a different set of permissions from the CMS itself.
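
To make the “separate component with different permissions” idea concrete, here is a purely hypothetical sketch (none of these URLs, paths, or keys correspond to any real Backdrop or Drupal tooling) of an updater run from cron as a dedicated deploy user that owns the codebase, while the web server user has no write access to it:

<?php
// Hypothetical sketch only: a stand-alone updater, run from cron as a dedicated
// "deploy" user that owns the document root. The web server user has no write
// access to the code, so a compromised site cannot modify itself or the updater.
// The URLs, paths, and signing scheme are placeholders, not a real Backdrop API.
$patchUrl = 'https://updates.example.org/core/security-latest.patch';
$docroot  = '/var/www/site';

$patch  = file_get_contents($patchUrl);
$sig    = file_get_contents($patchUrl . '.sig');
$pubkey = file_get_contents('/etc/site-updater/release-key.pem');

// Refuse to apply anything not signed with the project's release key.
if (openssl_verify($patch, $sig, $pubkey, OPENSSL_ALGO_SHA256) !== 1) {
  fwrite(STDERR, "Signature check failed; not applying update.\n");
  exit(1);
}

file_put_contents('/tmp/security.patch', $patch);
chdir($docroot);
passthru('git apply -3 /tmp/security.patch', $status);
exit($status);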

Brief case study: “Drupalgeddon”

Here’s the patch, including its test coverage, that fixed what has probably proven to be the most impactful security vulnerability in Drupal 7’s history to date:

[Image: the Drupalgeddon patch]

The fix itself is a one-line change in database.inc. Security patches, as in this case, are often very small and only affect a site’s behavior in the face of malicious inputs. That’s why there’s value in packaging them separately.
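
If memory serves, the heart of that change in DatabaseConnection::expandArguments() was essentially the following (simplified from the actual patch, which also included the test coverage mentioned above):

// Before 7.32: array keys taken straight from user-supplied input were used to
// build SQL placeholder names, enabling injection.
foreach ($data as $i => $value) {
  // ... placeholder names derived from $i ...
}

// 7.32: discard the keys so placeholder names are always derived from
// sequential integers.
foreach (array_values($data) as $i => $value) {
  // ... placeholder names derived from $i ...
}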

Drupal 7.32, the version that fixed this vulnerability, was released in October 2014. A

git apply -3 drupalgeddon.patch

is able to automatically apply both the database.inc and database_test.test changes all the way back to Drupal 7.0, which was released almost four years earlier in January 2011. Had the infrastructure been in place, this fix could have been automatically generated for all earlier Drupal versions, automatically verified with the test suite, and automatically distributed to every Drupal 7 website with no real added effort on the part of the CMS, and no effort on the part of site owners. Instead, in the aftermath, tremendous time, energy, and money were expended by the site owners that were affected or compromised, with those that didn’t patch within a matter of hours facing the highest expenses: forensically determining whether they were compromised and rectifying all resulting damages.

You better believe botnet operators maintain databases of sites by CMS, and are poised to use them to effectively launch automated exploits against the bulk of the sites running any given CMS within hours of the next major disclosure. So, unless the CMSs catch up, it is not a matter of if this will happen again, but when.

The only way to beat the automation employed by hackers is for the good guys to employ some automation of their own to get all those sites patched. And look how easy it is. Why are we not doing this?

Final thoughts

A CMS that offered an extended support lifecycle on their releases would make site ownership more affordable and simpler, and would improve overall Internet security. Besides being the right thing to do, if it made these promises to its users, Backdrop would be able to boast of a measurable affordability and simplicity advantage. And advantages are critical for the new CMS in town vying for market share in a crowded and established space.

Posted in dev, Drupal, PHP

JavaScript’s Object.keys performance with inheritance chains



Backstory (you can cut to the chase): I’m working on a JavaScript library in which I want to identify any changes a method has made to properties of the object it is applied to, preferably in a more intelligent way than cloning all the properties and comparing them all before and after invocation. This can be done by applying the method in question to an empty object x that inherits from the true/intended object, and looking for any own properties on x after the method is applied.

But how does one efficiently look for the properties on x? In JS implementations supporting ECMAScript 5, Object.keys looks like a promising candidate. The function is decidedly O(n), but I wanted to be sure that with ES5, “n” is the number of properties on just my object, and not the properties of all objects in the inheritance chain, as in the ES3 polyfill.

(The chase:) My test on jsperf.com shows that yes, in browsers with ES5, Object.keys doesn’t waste time iterating properties of the inheritance chain and discarding them.

See also a simple html page with some primitive timings, which basically says the same thing:

<html>
<head>
<script>
function BaseClass() {}
for (var i = 0; i < 10e4; i++) {
  BaseClass.prototype["p" + i] = i;
}

function SubClass() {
	this.thing = 'x';
}
SubClass.prototype = BaseClass.prototype;

var objInTest = new SubClass();


function polyfillKeys(obj) {
  var output = [];
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      output.push(key);
    }
  }
  return output;
}
// could do Object.keys = polyfillKeys

var start = new Date();
var keys = Object.keys(objInTest);
console.log("Native Object.keys: " + (new Date() - start));

start = new Date();
var keys = polyfillKeys(objInTest);
console.log("Polyfill way: " + (new Date() - start));
</script>
</head>
<body></body>
</html>
Posted in JavaScript

PHP’s mysqli::reap_async_query blocks



Just a quick note about mysqli::reap_async_query, since the official documentation is surprisingly lacking and I’ve never had any luck getting user-contributed comments posted to the PHP manual.

The manual has this to say about mysqli::query(“…”, MYSQLI_ASYNC): “With MYSQLI_ASYNC (available with mysqlnd), it is possible to perform query asynchronously. mysqli_poll() is then used to get results from such queries.”

Does this mean that the only safe way to call reap_async_query is to first poll for connections with complete queries, and only call reap_async_query on connections that you know have results ready? No. Here’s a quick sample script to show what happens if you skip mysqli_poll() –

<?php
$dbhost = "localhost";
$dbuser = "someuser";
$dbpass = "somepass";
$dbschema = "db_name";

$mysqli = new mysqli($dbhost, $dbuser, $dbpass, $dbschema);
$mysqli->query("SELECT SLEEP(5) AS sleep, 'query returned' AS result", MYSQLI_ASYNC);

echo 'Output while query is running...<br>';

$result = $mysqli->reap_async_query();
$resultArray = $result->fetch_assoc();

echo 'Got back "' . $resultArray["result"] . '" from query.';

outputs (after 5 seconds):

Output while query is running...<br>Got back "query returned" from query.

So it appears that reap_async_query will block waiting for the query running on the mysqli instance to complete, if that query is not ready yet. In many cases, then, there is no need to use mysqli_poll() at all.
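
For comparison, here is a minimal sketch of the mysqli_poll() approach (same placeholder credentials as above), which really only earns its keep when you are multiplexing queries across several connections:

<?php
$queries = [
  "SELECT SLEEP(2) AS sleep, 'first' AS result",
  "SELECT SLEEP(1) AS sleep, 'second' AS result",
];

// Fire each query asynchronously on its own connection.
$links = [];
foreach ($queries as $sql) {
  $link = new mysqli("localhost", "someuser", "somepass", "db_name");
  $link->query($sql, MYSQLI_ASYNC);
  $links[] = $link;
}

while ($links) {
  $read = $error = $reject = $links;
  // Block for up to one second waiting for any connection to have a result ready.
  if (mysqli_poll($read, $error, $reject, 1) < 1) {
    continue;
  }
  foreach ($read as $link) {
    if ($result = $link->reap_async_query()) {
      print_r($result->fetch_assoc());
      $result->free();
    }
    // Stop polling this connection now that its result has been reaped.
    $idx = array_search($link, $links, true);
    if ($idx !== false) {
      unset($links[$idx]);
    }
  }
}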

Posted in PHP

JSON and data: URLs can show CSS Sprites a thing or two



Concept

I’m not usually as much into web performance as some, but I had a project in mind recently that included a requirement to load a large number of small photos onto a single page. On this project, I wanted to nail the technical approach and attend to every detail, from using optimizing PHP compilers and memcached to cooperating with the network for optimized transfer times. So, I put some thought into how best to deal with the high image counts. The solution I came up with was a little different from anything I’d seen before, and delivers complete page load times that routinely represent better than a 3x speedup over no optimization. It’s got a different set of benefits and drawbacks from other optimizations out there, but for typical use cases involving lots of image tags on one page it pretty much puts the standard css sprites tactic to shame.

The basic concept is similar to CSS sprites: since the major HTTP server and browser manufacturers failed to provide the world with bug-free and interoperable HTTP pipelining, page authors can attain better performance by reducing the total number of requests their pages require. This minimizes the time between requests when no data is actually getting transferred. In order to retain the multimedia richness of the page yet reduce requests for separate image resources, you need to find clever ways of packaging multiple resources into a single HTTP response.

My twist on this was to package the multiple images by base64-encoding their byte streams and framing them with JSON, instead of packing many distinct images into one bigger image file and taking scissors to it on the client. I call it “JSON packaging.” It might sound impractical and unlikely at first, but the downsides that may initially come to mind (the space and decoding overhead of base64) turn out to have surprising silver linings, and on top of that it has some nice additional advantages over css sprites.

First, to retain this post’s credibility and your interest, I’ll try to lower the first big red flag that may be in your mind at this point – when you base64-encode binary streams with 8-bit character encodings, you make them 33% bigger. A little more overhead is added by the JSON notation. How can any attempt at optimization succeed if it requires 33% more data to be transmitted? Well, on many networks it wouldn’t, but thanks to gzip compression, that’s not what ends up happening. Applying gzip compression to the completed JSON object produces an unexpected result: the compressed base64 data actually becomes smaller than the sum of the sizes of the original images, even if the original images are also gzipped. Why is this? Because gzip has more than one image’s worth of data to work with, it can identify and compress more occurrences of more patterns. For this reason you would expect better compression if you compressed all the original images as one big file (and css sprites also benefit from this phenomenon), but given the 33% starting overhead of the base64 encoding, the outcome still defies conventional wisdom.
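
If you want to verify that claim against your own images, here is a minimal PHP sketch (the images/ directory is just a placeholder, and this is not the generator script linked at the end of the post) that builds a package and compares gzipped sizes:

<?php
// Build a base64/JSON package from a directory of JPEGs and compare its gzipped
// size against the sum of the individually gzipped files.
$package = [];
$individualGzipped = 0;

foreach (glob(__DIR__ . '/images/*.jpg') as $file) {
  $bytes = file_get_contents($file);
  $package[] = base64_encode($bytes);
  $individualGzipped += strlen(gzencode($bytes, 9));
}

$json = json_encode($package);
printf("Sum of individually gzipped images: %d bytes\n", $individualGzipped);
printf("Gzipped base64/JSON package:        %d bytes\n", strlen(gzencode($json, 9)));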

Okay, so what are the advantages of JSON packaging over css sprites?

  • Unlike with css sprites and the usual front-end performance rules, you don’t have to keep track of the dimensions of each image. With css sprites, this is a must for everything to get cropped and displayed as intended (though I’ve seen IE fail and show edges of other sprites on tightly-packed backing images when page zoom isn’t 100% anyway). In addition, because large batches of images are presented to the renderer in one shot, page reflow counts are kept in check when you don’t know, or choose not to state, the image’s size in markup. This helps to counteract the cost of the base64 decode.
  • It’s easier to dynamically generate these puppies than CSS sprites. You almost never see CSS sprites constructed for anything beyond a set of static icons and the branding of the particular site. One reason for this is that you would need to set up webservers that could collect the images of interest, render them all onto a new canvas, and recompress them into a new image file in a performant fashion. With JSON packaging, the load on the servers for generating customized packed resources is reduced.
  • It’s much better suited to photography, since it does not require an additional lossy recompress – the original image file’s bytes are what is rendered.
  • You can use appropriate color palettes, bit depths, and compression formats on an image-by-image basis, yet still achieve the enhanced gzip performance of css sprites.

Proof-of-concept

I put together a few proof-of-concepts which you can check out for yourself. I also measured some performance data from these. Each example contains two static HTML pages, each with 100 img tags. One page is unoptimized – URLs to each individual image are included in the img tags. The other page references four external <script> resources and includes no src for the img tags.

Example 1

100 small JPGs, 4 KB to 8 KB per file. Unoptimized | JSON-Packaged

Example 1 is designed to approximate a typical real-world scenario. In particular, when you’re showing a bunch of images on one page, typically each individual image isn’t all that big (in this example the long dimension is 200px). In nearly all permutations of network type, browser, and client CPU speed that I have tried, the JSON-packaged case performs better during initial page loads, and usually at least halves total page load time. Here are some specific figures from some of my test runs to give you an idea. These are approximate averages of multiple runs, not highly scientific but intended to illustrate what is typical:

Initial load times
Browser | 3 GHz Core 2 Quad (Q6600)             | 1.6 GHz Atom (N270)
IE 9    | 508 ms packaged / 1,127 ms unpackaged | 540 ms packaged / 1,672 ms unpackaged
FF 13   | 466 ms packaged / 1,117 ms unpackaged | 723 ms packaged / 1,590 ms unpackaged

This table shows the time to render the complete page when the files are cached and not modified, but the browser revalidates each resource with the server to see if it has changed:

Cached load times (revalidated)
Browser | 3 GHz Core 2 Quad (Q6600)             | 1.6 GHz Atom (N270)
IE 9    | 172 ms packaged / 1,197 ms unpackaged | 203 ms packaged / 1,200 ms unpackaged
FF 13   | 180 ms packaged / 1,062 ms unpackaged | 180 ms packaged / 1,079 ms unpackaged

This table shows the time to render the complete page when techniques like far-future Expires headers have been used to inform the browser’s cache that it’s not necessary to revalidate the resources, so no round trips to the server occur:

Cached load times (not revalidated)
Browser | 3 GHz Core 2 Quad (Q6600)          | 1.6 GHz Atom (N270)
IE 9    | 120 ms packaged / 16 ms unpackaged | 145 ms packaged / 64 ms unpackaged
FF 13   | 62 ms packaged / 42 ms unpackaged  | 375 ms packaged / 320 ms unpackaged

I obtained these figures over the public Internet, with a latency to the server of 54 ms and bandwidth of about 15 Mbps. Results favor packaging even more on higher-latency links. On very low-latency networks like a LAN, this is not an optimization, but either way the whole thing is sufficiently fast in that environment. Similarly, if resources are cached on the client in such a way that the client does not revalidate each one with the server, this is not an optimization; the inconsistent results above show that at this speed other factors play a bigger role in overall load time, but the whole thing is sufficiently fast no matter what.

Finally, here’s a table of total bytes that need to be transferred with the packaged/unpackaged samples under various compression scenarios. As you can see, packaging also results in a little bit less data transfer:

Bytes: base64 in JSON vs. separate files
Sample: small jpg (mostly 200×133 px)
Delivery                    | best gzip        | fastest gzip     | no compression
100 separate files          | 509,883 (-2.3%)  | 510,630 (-2.1%)  | 521,763 (reference)
Base64 in four JSON objects | 460,281 (-11.8%) | 473,169 (-9.3%)  | 696,372 (+33.5%)

Example 2

100 medium JPGs, 40 KB to 90 KB per file. Unoptimized | JSON-Packaged

Example 2 is designed to experiment with how far you can take this – it uses much larger image files just to see what happens. The dimensions of these images are large enough that it doesn’t work well to show them all on one page, so this is probably not a typical real-world use case. Here the results I obtained favor the “unoptimized” version due to the increased base64 decoding overhead. Thus, I’ll leave it at that – you’re welcome to test it out yourself for some specific figures.

Implementation Notes

Once you have your image data as base64-encoded JavaScript strings, some very light JavaScript coding is all that is needed to produce images on your page. Thanks to the often-ignored data: URL scheme supported by all browsers nowadays (detail here), all you need to do is set your image’s src attribute to the base64-encoded string, with a little static header indicating how the data is encoded. In my example, I pass a JS array of base64 strings to the function unpack_images, which simply assigns each string to an img already in the document. In an application you would invent a more complex scheme to map the base64 data to a particular img in the DOM, such as creating the DOM images on the fly or including image names in the JSON.

function unpack_images(Packages) {
  for(var i = 0; i < Packages.length; i++)
  {
    document.getElementById('img' + i).src = "data:image/jpg;base64," + Packages[i];
  }
}

Using four separate js files to package the images wasn’t an arbitrary decision – this allows the browsers to fetch the data over four concurrent TCP streams, which results in faster transfers overall due to the nature of the beast. (This is what makes this approach superior to simply stuffing all your data into one massive integrated HTML file.) Also, I tweaked my initial version of Example 1 a little bit to enable the base64 decoding to commence immediately when the first js file has completed transferring, while the remaining files are still finishing up. To do this, place your unpack_images function in a separate <script> tag, and somewhere below that in your HTML page add script tags for your js files with the defer attribute:

 <!-- image data package scripts -->
<script type="text/javascript" src="package-1.js" defer></script>
<script type="text/javascript" src="package-2.js" defer></script>
<script type="text/javascript" src="package-3.js" defer></script>
<script type="text/javascript" src="package-4.js" defer></script>

Then, just wrap your JSON data in a call to unpack_images directly in your package.js files (yes, it’s not a pure JSON object anymore):

 unpack_images(["base64data_for_image_1", "base64data_for_image_2", ...]);

This tweak saves 80 – 100 ms over loading all the data first and then decoding it in my Example 1.

All the content in these examples except the individual image files was generated using this script, if you want to pick it up and run with it.

Conclusion

By my analysis, this technique seems to put css sprites to shame in just about any use case. As a word of caution, though, both css sprites and JSON packaging don’t play very nicely with browser caches, since those only store one entity per URL. Consider the common case where a summary page shows dozens of product images, each linking to a product details page. The first time the user visits your site’s summary page, you are probably better off delivering the images in packages. On the other hand, you want to avoid referencing the packaged set on the product details page in case the user entered your site directly at the details page, but it would be nice if you could fetch the particular product’s image from the already-cached package if you’ve got it in cache. It’d be nice if there were a JavaScript API that allowed you to save script-generated resources to the browser cache under any URL within the document’s domain, but until that happens this is the ugly side of both css sprites and JSON packaging.

 

Posted in HTML - Tagged performance

Recent Posts

  • Reset connection rate limit in pfSense
  • Connecting to University of Minnesota VPN with Ubuntu / NetworkManager native client
  • Running nodes against multiple puppetmasters as an upgrade strategy
  • The easiest way to (re)start MySQL replication
  • Keeping up on one’s OpenSSL cipher configurations without being a fulltime sysadmin

Categories

  • Computing tips
    • Big Storage @ Home
    • Linux
  • dev
    • devops
    • Drupal
    • lang
      • HTML
      • JavaScript
      • PHP
    • SignalR
  • Product Reviews
  • Uncategorized

Tags

Apache iframe malware performance Security SignalR YWZmaWQ9MDUyODg=

Archives

  • June 2018
  • January 2018
  • August 2017
  • January 2017
  • December 2016
  • November 2016
  • July 2016
  • February 2016
  • January 2016
  • September 2015
  • March 2015
  • February 2015
  • November 2014
  • August 2014
  • July 2014
  • April 2014
  • February 2014
  • January 2014
  • October 2013
  • August 2013
  • June 2013
  • January 2013
  • December 2012
  • November 2012
  • September 2012
  • August 2012
  • July 2012

Blogroll

  • A Ph.D doing DevOps (and lots else)
  • gavinj.net – interesting dev blog
  • Louwrentius.com – zfs@home with 4x the budget, other goodies
  • Me on github
  • My old edulogon.com blog
  • My old GSOC blog
  • My wife started baking a lot
  • Now it's official, my wife is a foodie

Meta

  • Log in
  • Entries feed
  • Comments feed
  • WordPress.org

EvoLve theme by Theme4Press  •  Powered by WordPress