This wiki is an XML full-dump clone of "Heroes Wiki", the main wiki about the Heroes saga, which was permanently shut down on June 1, 2020. The purpose of this wiki is to keep an exhaustive and accurate database about the franchise online.

User:Admin/Journal

Revision as of 01:36, 8 April 2015 by imported>Admin (First update in a long time!)

Journal of a Wiki Administrator

This page is just a place for me to discuss some of the more interesting topics that come up from time to time administering a wiki like Heroes Wiki.

Hosting

One advantage of owning your own web hosting company is that it's easy to find a place to host a site like Heroes Wiki. Another advantage is that, being in charge of the technical operations, I'm not constrained by any third parties and can pretty much do whatever I need to make something work. Compare this to other hosting companies, where you might be restricted to a certain version of PHP or MySQL, or where you don't have full control over the HTTP server configuration. If we'd been restricted to MySQL 4.1 or PHP 4, for instance, we'd have had to run a much older version of MediaWiki. Being able to run the latest software allowed us to keep up with the latest MediaWiki features. That alone probably didn't make much of a difference, but it was nice anyway.

Up until June 2007, Heroes Wiki was running on the same server as a bunch of other sites we host. As the wiki became more popular, we had a harder time keeping up with the traffic (specifically on Monday night and Tuesday, following a new episode). The primary load came from PHP having to interpret each request on the fly, which necessitated the addition of APC, a PHP caching module. Had I not had control over Apache, it might have been difficult or even impossible to get APC added in a timely fashion. Even with this flexibility it was a little tricky, since I was running different versions of PHP concurrently on that one server, but it was accomplished relatively quickly and the wiki soon became much more responsive even during periods of high activity.

So in the end, the flexibility of managing my own server was instrumental in keeping up with the demands of the wiki.
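For reference, enabling APC on that era of PHP came down to a small php.ini fragment along these lines. The paths and sizes below are illustrative, not the actual production values:

```ini
; Load the APC opcode cache (the extension path varies by build)
extension=apc.so
apc.enabled=1
; Shared memory reserved for cached opcodes; size depends on how much
; PHP code the wiki loads per request
apc.shm_size=64M
; Check file modification times so edited PHP files are re-cached
apc.stat=1
```

The win comes from skipping the parse/compile step on every request, which is exactly where the load was concentrated.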

Moving to a new server

Background

By the time June 2007 came around, I decided it was time to move Heroes Wiki off of the shared hosting server and onto a more powerful, dedicated server. There were a few reasons for this.

For starters, it was becoming ever trickier to cleanly maintain separate versions of PHP and MySQL on the same server. Heroes Wiki was using the newer versions of both, and I had to ensure that any change to either didn't impact the older versions being used by other websites hosted on the same server. A few times, for instance, a PHP update clobbered some config settings used by the older version and a few sites ran into problems. Adding APC for PHP caching was also trickier, because I had to perform some steps manually to ensure it was compiled against the version of PHP used by Heroes Wiki and not the older one used by the other sites.

Secondly, the shared hosting server was running an OS that was getting a little old, so it lacked some of the convenient features of newer releases that would help me add features more quickly. Third, Heroes Wiki is continually growing, so I knew it would eventually outgrow the server (which was already a few years old). By purchasing a powerful new server, I could ensure that we'd be set for some time.

Preparation

If I was going to move the wiki onto a new server, it was important to me to minimize the downtime as much as possible. I often see sites that migrate to a new server being down for hours (and sometimes even days!). Perhaps it's because I manage servers for a living, but I find downtime distasteful. Fortunately, I know how to transparently migrate services to new servers, since I do it for work every so often.

One of the first tasks was to get the database copied over to, and running on, the new server. Late one night I switched the wiki into read-only mode and performed a MySQL dump of the wiki database. This didn't take too long, and the wiki was then switched back into read-write mode; most people wouldn't have even noticed unless they tried to edit a page during that brief window. Once the dump was imported on the new server, I configured MySQL replication so that any changes to the database on the old server would be immediately replicated over to the new server. OK, database taken care of. An rsync of the site from the old server then established a base copy of the wiki on the new server.

Also fortunately, I was already running Squid on the front end of the old server, proxying requests to Apache on the same machine. This meant that when I wanted to actually perform the switch, all I technically needed to do was repoint Squid at the new Apache server. Squid was primarily responsible for the transparency of the migration.
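As a rough sketch of that dump-plus-replication setup (hostnames, database names, binlog coordinates, and credentials here are all made up for illustration):

```shell
# 1. On the old server: brief read-only window while dumping the wiki DB.
#    --master-data records the binlog position needed to start replication.
mysqldump --single-transaction --master-data=2 heroeswiki > heroeswiki.sql

# 2. Ship the dump and a base copy of the site files to the new server.
scp heroeswiki.sql newserver:/tmp/
rsync -avz /var/www/heroeswiki/ newserver:/var/www/heroeswiki/

# 3. On the new server: import, then replicate from the old master so any
#    edits made after the dump keep flowing across until the switchover.
mysql heroeswiki < /tmp/heroeswiki.sql
mysql -e "CHANGE MASTER TO MASTER_HOST='oldserver',
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=12345;
          START SLAVE;"
```

The point of the replication step is that the read-only window only needs to be long enough for the dump itself; everything after that catches up asynchronously.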

Switchover

When the time came for the switchover, I pointed Squid at the new server first and then changed DNS for the wiki to point to the new server. This way, anyone still using cached DNS info would go to the old server, but Squid would transparently redirect them to the new one; anyone getting new DNS info would just go straight to the new server. For added safety, I put the wiki into read-only mode beforehand to give myself time to safely perform one last rsync of the site and make the required config changes, but like before this only took a couple of minutes at most, and the wiki was accessible the whole time. A clean, professional migration... just what I wanted.
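This isn't the actual configuration, but on the Squid side the repoint amounts to changing the backend origin server in an accelerator setup, roughly like this (addresses and names are hypothetical):

```text
# squid.conf (illustrative): Squid accelerating the wiki in front of Apache.
# Before the switch the cache_peer pointed at the local Apache; changing it
# to the new server's address sends all traffic there, regardless of
# whether the client's DNS cache still resolves to the old box.
http_port 80 accel defaultsite=heroeswiki.com
cache_peer 10.0.0.2 parent 80 0 no-query originserver name=wiki_backend
acl wiki_sites dstdomain heroeswiki.com
cache_peer_access wiki_backend allow wiki_sites
```

Because clients never talk to Apache directly, the backend can move without any client-visible change; DNS just determines which Squid they hit during the transition.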

Motivations

When showing Heroes Wiki to other people, one of the most common questions I get is "So how much are you making off of it?" When I tell them that I don't make anything off of it and that I'm not doing it to make money, they often try half-heartedly to convince me that I should. I tell them the same thing each time: in my opinion it wouldn't be appropriate to make money off of it, since it's the community that's making it successful. My only monetary goal is to reimburse myself for the expenses of the wiki (advertising, server cost, etc.) and for the wiki to be self-sustaining. At the same time, I love the idea of the wiki being able to give something back, which is why I try to donate a good deal of the revenue to charities (at the moment the Pediatric Epilepsy Project, a charity that Greg Grunberg sponsors).

That kind of social morality can be hard to believe... usually for people who don't actually know me in person. If it's hard to believe, I offer some other observations. :) Running a wiki isn't a full-time job (until you're the size of Wikipedia, for instance). I have a job that pays well, so I don't need to look for ways to supplement my income. And even if I did, advertising revenue on the site doesn't currently generate more than $100-$200/month, so if money were an issue I'd be doing something more profitable. :)

So the question is: what do I get out of it? Well, I've always enjoyed being part of something successful. As popular as Heroes Wiki has become, I consider it very successful, and I'm proud to be playing a part in it. That feeling of accomplishment is all the payment I need.

Upgrading MediaWiki

I like to keep wikis up to date with recent features and bugfixes. When a new version of MediaWiki is released, I usually wait at least a few weeks to make sure there aren't any critical bugs in it. Once it seems stable, I begin planning the update. I usually start with one of the less popular localized wikis, since if anything went terribly wrong it wouldn't be as catastrophic as if it affected Heroes Wiki itself.

Since I modify the MediaWiki code from time to time, I can't just upgrade the software normally by overwriting all the files. I usually take advantage of the release diffs that the developers put out and patch the code that way. First I take a copy of the code and apply the patch to it, just to see if there are any conflicts. If it looks good, I'll roll it out to one of the localized wikis (making sure to back up the wiki first) and see how it looks in use. When diffs aren't provided, I'll often make my own based on the stock version I'm using and the new version I want to upgrade to.

I've been casually planning the next upgrade. Currently all of the Heroes Wiki sites are running 1.10, which I want to upgrade to 1.11. I tried making diffs, but there were issues that led me to decide on a fresh upgrade this time. To do that, I first generated a diff between the stock 1.10 code and the code running on Heroes Wiki so I could see the few changes I had made. I then applied them by hand to the 1.11 code, taking the time to check for any changes in the functions my modified code was using. It didn't take much time at all, and when I was finished I had a clean copy of the 1.11 code with my desired changes already applied. Next came the job of updating the wiki. I wasn't about to perform an update of the software and database without some testing:

  • Perform quick database dump of Heroes Wiki.
  • Make test copy of all Heroes Wiki code and files.
  • Load Heroes Wiki dump into new test database.
  • Point test copy of Heroes Wiki to correct path and database.
  • Test Heroes Wiki test site to make sure it works.
  • Upgrade Heroes Wiki test site by overwriting with 1.11 files (per MediaWiki upgrade instructions).
  • Run update.php to apply database changes.
  • Test newly upgraded Heroes Wiki test site to make sure it still works.
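The diff-and-patch workflow from above can be demonstrated end to end with plain diff and patch. The toy file trees here stand in for the stock 1.10 tree, the 1.11 tree, and the locally modified live copy; they are not the real MediaWiki layout:

```shell
# Toy trees standing in for stock 1.10, new 1.11, and the live wiki copy
mkdir -p stock new live
printf 'version=1.10\n' > stock/Version.php
printf 'version=1.11\n' > new/Version.php
cp stock/Version.php live/Version.php     # live copy starts from stock

# Build our own release diff when upstream doesn't provide one
# (diff exits nonzero when files differ, so don't let that abort a script)
diff -ruN stock new > upgrade.patch || true

# Dry-run against the live copy first to surface conflicts, then apply
(cd live && patch --dry-run -p1 < ../upgrade.patch \
         && patch -p1 < ../upgrade.patch)

cat live/Version.php    # now shows version=1.11
```

The dry run is the important part: it reports conflicts against locally modified files without touching anything, which is exactly what you want before patching a live wiki.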

Ran into a little problem with the newer version and "pretty URLs": had to add $wgUsePathInfo = false; to LocalSettings.php, but then everything worked beautifully. Going to continue testing for a few days and then take care of the wiki upgrades when the site is less busy.

LISA conference

So this is only tangentially related to my work on Heroes Wiki, if at all. I'm currently attending the LISA conference here in Dallas for work. While the number of servers I administer doesn't come close to, say, the number Google or Yahoo have to manage, I still administer a good deal of them... enough that it's valuable to get some insight into how these larger companies handle it. So the conference has a lot of potential: I'd rather know how people are doing it now, and where the technology is headed, so I'm ready when we grow even larger.

I've already met and talked with a number of other systems administrators here, and it's funny how quickly one systems admin can relate to another they've just met. Before you know it, we're trading stories and trying to help each other out. I ran into a systems admin from Google whom I keep running into between this conference and the MySQL conferences out in California. Since this conference is about large system installations, it's doubtful anything I pick up will be tremendously useful on the wiki, which is much smaller than the systems I manage for work... but who knows, maybe I'll get lucky. And there have got to be some Heroes fans here... maybe I can introduce a few to Heroes Wiki. :) ...or maybe someone here at the conference is already on the wiki!

MediaWiki 1.12

I have to admit I'm personally very disappointed with this release. Prior releases have felt much more stable, and we ran into a number of issues at Heroes Wiki with this one:

  • Transclusions broke when template variables were set dynamically (e.g. when the ParserFunctions extension is used to compute a value). Specifically, this seemed to happen with nested transclusion (e.g. a template that uses a sub-template). It was the result of the new parser that 1.12 uses. The new parser also causes problems with a few extensions, including DPL, so once I discovered this was the problem I promptly switched back to the old parser ($wgParserConf['class'] = 'Parser_OldPP'; in LocalSettings.php).
  • Redirects in the Image namespace weren't working properly. I believe this only occurred when redirecting an image that did exist to a different image. Perhaps this behavior was never really intended, as it is a little counter-intuitive; however, we use this "feature" to display one version of an image while redirecting visitors to a different (usually larger or higher resolution) version when it's actually clicked. This one was tricky to track down and had me going through several components of MediaWiki (such as the filerepo code) to find out what was interfering. To restore the functionality I had to rip out a line of relatively recently added code from Wiki.php. Hopefully it wasn't there to fix some other important bug, but it was necessary to restore a feature we actually use here.
  • The behavior of shared file repositories seemed to change, too. This didn't affect the English site, but it did affect all the localized wikis. The shared file settings allow those sites to access images on the English site when they reference images that haven't been uploaded locally. This keeps files from having to be duplicated all over the place and generally makes it easier to transfer and translate content from the English site. The images themselves were still working, but whereas clicking on them used to link you back to the English site, they now linked directly to the localized site. Since the image didn't exist there, people were greeted with a generic and unhelpful error. After reading through some code and configuration, I ended up reconfiguring the localized wikis, setting $wgSharedUploadDBname to the Heroes Wiki database name and setting $wgRepositoryBaseUrl = "http://heroeswiki.com/Image:". This gave the localized wikis enough information to display the necessary details about the file: when you visit the image link now, it explains that it's a shared image and provides a link to the master copy. On the bright side, it allows each localized wiki to add its own descriptions and categories to the file without changing the master copy. So in the end this turned into a beneficial feature; I was just not pleased that it broke so dramatically during the upgrade and that I had to dig around to find the fix.
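Collected in one place, the LocalSettings.php workarounds above look roughly like this. The database name is a guess at this site's, and $wgUseSharedUploads is assumed to have already been enabled (the notes above only mention the two settings that changed):

```php
<?php
// Workarounds applied after the 1.12 upgrade (LocalSettings.php excerpt).

// 1. The new preprocessor broke nested transclusions with dynamic variables
//    and extensions like DPL, so fall back to the pre-1.12 parser.
$wgParserConf['class'] = 'Parser_OldPP';

// 2. On each localized wiki, point the shared file repository back at the
//    English wiki so clicked images resolve there instead of erroring locally.
$wgUseSharedUploads = true;                     // assumed already set
$wgSharedUploadDBname = 'heroeswiki';           // assumed database name
$wgRepositoryBaseUrl = "http://heroeswiki.com/Image:";
```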

Server Move: The Next Generation

So, frankly, I forgot this journal was even here until just now, so here's some information on the most recent server move, in March 2014. The previous server had been in place since June 2007 and was starting to get a bit dated. Since the site was running on a single server, certain hardware failures could have taken it down for an extended period while I worked on restoring it, despite many of its internal components being redundant. The software available for the operating system was also a bit dated, which would limit my MediaWiki upgrade options... or at least those that could be done safely. As I'd been doing quite a bit of sysadmin work on cloud providers, I decided it would be a good opportunity to bring the site into the current decade.

Because the site had much less activity than it did during the last move, the switchover had less impact, though as usual I took steps to minimize downtime. At the same time as I switched servers, I also upgraded the backend database software as well as MediaWiki. After running a test migration and upgrade and knocking out a few bugs, it was a simple matter of setting the old wiki to read-only mode while I dumped the databases and synced the site files. Once the new site was up, a simple repointing of DNS allowed traffic to bleed off the old server and onto the new one over the next few hours. Once the old site was no longer being accessed, the system was powered down. Good night, you served us well!

On the new server, with the new database and new MediaWiki, a few outstanding issues were discovered. Some of the DPL criteria used for searching (like finding all pages changed by a particular user) were taking far too long to execute, so I had to disable them, as they were driving up the system load. Fortunately, they don't appear to have been used for anything important. I also ran into some system memory issues early on, but this was addressed by simply bumping the wiki up to the next instance type. This is one of the huge benefits of running on a cloud provider: I have many more options for scaling the site based on load, without having to purchase and deploy additional hardware.