home networking

treenet topology

Nyx Net,
http://www.nyx.net
the oldest operating
free public access ISP


(and some of the other stuff that lives in the basement)
central network rack
and dialup access

This corner of the network connects the internal machines to the rest of the world. On the left are the Cisco routers that route traffic to and from the Internet; above those are a stack of 3Com 3300 series switches that are connected through the one-gigabit backplane provided by the 3c16960 Matrix module installed in the base unit. Eventually, the main network will probably migrate over to the 3Com CoreBuilder 9000 series units to their right, which would add near-wire-speed layer-three switching and a combined switching fabric capacity of 48 gigabits per second.


It might not be this year's state-of-the-art telecommunications system, but I think that forty-eight gigabits per second is sufficient for almost any home network, with or without an ISP in it.

The main routers are a pair of identically-configured Cisco 3620 modular routers; only one is in service at any given time, while the other sits pre-configured as a ready-to-go spare.

cisco routers and 3com switches
Cisco 3620 modular routers and 3Com 3300 series switches
shelves of individual modems
dialup access, version 1.0 - Paradyne channel bank, lots of external modems, and a Portmaster 2e

Dialup access has gone through a lot of changes over the past few years. At first, the modems took up an entire rack (note the stylish Black Box passive A/B serial switches being used as shelves), with a Paradyne analog channel bank and twenty-four assorted external modems, each connected to one port on a Portmaster 2e, graciously donated to Nyx by Livingston (now Lucent).

The individual modems were eventually replaced with Boca rack-mounted modem banks (donated by Tommy Bowen), bringing all the dialup lines up to 33.6.

When the Portmaster 2e failed to come back up after an extended power failure, I tried tinkering with a USR Total Control digital modem bank that I'd picked up for next-to-nothing, since it was believed dead and had only a token-ring network interface. Disabling the management card, however, brought the T1 interface and the collection of analog/digital modem cards to life. Since the unit still had no working network interface, I ended up taking the serial outputs of the individual modems and running them down to an Annex 4000 terminal server, and we were back in business.

Theoretically, I should have been able to squeeze v.90/56k service out of the Total Control unit, but I never quite managed to do it. (Partly because, once the lines were back in use, I couldn't do any tinkering that threatened to take down Nyx's dialup service.) About a year later, I scared up a Lucent Portmaster 3, and Nyx finally made the jump to 56k service. (whew!)

In keeping with my general trend towards sparanoia, I've since collected two more spare Portmaster PM3s, just in case, and the Total Control and Annex 4000 have finally been taken down to make more space available on the rack.

Also in this section are Hermes, the mail server, and Nyx0, a dedicated nfs server that stores all the user data for anyone with a shell account on one of the login machines. Because of the ever-increasing demands of handling and filtering junk email ("spam"), Nyx's mail server has been upgraded more times than any other machine. As of this writing, it's now up to a dual Athlon-MP 1800+ system with 2.5 Gigabytes of memory and a half-dozen 18G Seagate Cheetah 10k drives.

Even though most of their client machines are Sparc-based systems running SunOS or Solaris, both of these machines are x86-based systems running Linux. The main reason for picking Linux over Solaris for Intel for these machines was the performance problems reported for the Solaris/x86 drivers for the AMI MegaRAID/Enterprise series of controllers, which are used in almost all the x86 servers out here.

nfs and email servers
mail server and dedicated nfs server
Nyx, Nyx10, Noc, and Arachne
(the 'old' Nyx machines)

Nyx's eponymous server, Nyx.Nyx.Net, is the main login machine still running SunOS; it's currently a Sparc 20 with a 150MHz Ross Hypersparc processor and 512Meg of memory. I have lots of faster machines (even quad-Hypersparc 4m architecture systems), but despite what Sparc stands for, SunOS 4.1.4 doesn't support multiple CPUs or any of the Ultra-series machines, so that's the fastest system I have that can run software that old.

We've recently put the new generation of login machines into service: three Sun dual-300 Ultra 2s with 2 gigabytes of memory, unimaginatively named Nyx1, Nyx2, and Nyx3. Users have been slowly migrating over to those systems as all the kinks are worked out of the jump from SunOS 4.1.4 to Solaris 8, and eventually these will take over as Nyx and Nyx10 are retired.

here come the suns
The "New Nyxen" -- the newer, faster login machines

The only other remaining Sun 4m-based machine had been Arachne (web server, Sparc5/110), but that was recently replaced with a dual Pentium-III/850MHz system with 2 Gigabytes of registered ECC memory, an AMI MegaRAID Elite 1500 with 128 Megabytes of ECC cache, and six 18Gig Seagate Cheetah 10K drives.

Arachne's new avatar is running Slackware 8.1.1 and its drive arrays are now set up with the Ext3 journalling filesystem.

Iris and Irys, news machines
Iris and Irys,
(news machines)

Iris is the older news machine, serving as the primary server for local and internal newsgroups and as a backup news server for the remaining Usenet groups. Its drives and filesystem are set up in the "partition one really big drive" model, and it's the last machine still set up that way.

Irys is the "newer" news machine, and was configured with a little more thought and planning ahead of time, instead of following the "oh my gosh, the news machine just crashed; quick--let's install the news software on the first machine I can get my hands on!" approach.

Irys is currently a dual 500MHz PIII Xeon system with one gigabyte of registered ECC memory--like most of the older x86-based servers, the hard drives are SCSI RAID-5 arrays running on AMI MegaRAID 928 (aka Enterprise 1200) controllers.

AMI MegaRAID 428

The MegaRAIDs feature three independent ultra-wide SCSI channels, an Intel i960 processor, and up to 128Meg of memory on the controller. Both Irys (news server) and Nyx0 (nfs) are maxed out with the full 128Meg on the card.

I've been extremely happy with the performance and reliability of the MegaRAIDs, and their cross-platform support is excellent. (Except for the aforementioned Solaris driver bugs and, of course, Microsoft's 95/98/ME product line, which is the only operating system family I know of that doesn't support the MegaRAIDs or, for that matter, any high-end SCSI RAID controllers.)

Most of the older single-purpose servers are built with three drives or RAID arrays: one for boot/system, one for "content," and one dedicated log drive. For single-purpose (not login) machines, everything except content and logs goes onto a single drive or partition. In Irys' case, the system "drive" is a four-drive RAID 5 array; "content" is another RAID 5 array with eight 4.3 gig drives, and the log drive is a single 2 gig drive. All drives currently installed are Seagate Barracudas, except for the log drive, which is one of the slower "Hawk" series.

some of the trygve.com servers
web, mail, and dns servers
for trygve.com, et al

I keep revising my philosophies of server construction with time, and I've recently stopped having a dedicated logfile drive. Currently, I'm letting the logfiles go to the same drive array that has the system files on it. One reason is that readily available hard drives are bigger, faster, and cheaper than they used to be; with fewer drives per array, I lose proportionately more drive space to redundancy, but fewer drives mean a longer mean time before one of them goes bad. The other reason is that I've just had a run of bad luck with logfile drive failures. It's probably just coincidence, but most of the drives that have gone bad in the last few years have been the ones dedicated to logfiles. That might not sound so bad at first--it's generally not heartbreaking if your logs get wiped out by a drive failure--but it does mean that the whole system goes down. If one of the drives in a RAID-5 array dies, your server just keeps humming along.
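The redundancy math behind that tradeoff is simple: a RAID-5 array stores one drive's worth of parity, so usable space is (n-1)/n of the raw capacity, and the fraction lost shrinks as the array grows. A quick sketch (the drive counts and sizes here are illustrative, not the actual arrays):

```python
def raid5_usable(n_drives, drive_gb):
    """Usable capacity of a RAID 5 array: one drive's worth of space goes to parity."""
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (n_drives - 1) * drive_gb

# The fraction of raw space lost to parity shrinks as the array grows:
for n in (3, 4, 8):
    raw = n * 18
    usable = raid5_usable(n, 18)
    print(f"{n} x 18G: {usable}G usable, {100 * (raw - usable) / raw:.1f}% lost to parity")
```

This is also why a dead log drive took the whole system down while a dead RAID-5 member doesn't: the parity lets the array reconstruct any single failed drive on the fly.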

My own web and mail servers are basically half-size versions of Irys, except that the mail machine uses IDE drives rather than SCSI. (Nyx handles dozens or hundreds of letters per second; I rarely get more than a dozen in a minute.)

Which is just as well; up here in the "control room" where I am most of the time, I have enough trouble keeping up with the email I already get.

Most of the existing x86-based servers are built on the Intel dk440lx "Dakota" board, which has proven to be a good, stable mainboard over the years. For the new servers, however, I've been switching to the Supermicro 370DLR, prompted initially by getting a good deal on a batch of 512Meg registered ECC SDRAM. The catch turned out to be that it was built with 64M4 chips, so it wouldn't work on anything but a Serverworks-based mainboard.

So, I got a stack of 370DLR boards, loaded them up with two gig of memory apiece, and populated most of them with a pair of Pentium-III/850 CPUs and a few with dual PIII/1.1GHz CPUs. This one's the first I'd set up; all the later ones have external drive arrays, which makes maintenance and repair a little easier and faster. I've also been switching over to the AMI/LSI Logic MegaRAID 1500 controllers in the newer machines.

Supermicro 370DLR dual PIII system

The systems on the upper levels of the treehouse are designed a little differently than the ones below. For starters, the Nyx machines, being servers of the serious sort, all have serious names like "Nyx," "Erebus," "Anubis," "Arachne," and others taken from ancient mythologies.

up in the control room
upstairs in the editing room

Above ground, however, it's a different story: the second story, as it happens, shown in the picture to the right. The main editing computers are "Kanga" and "Roo," now in their third incarnation. As I get older, my computers keep getting smaller. When I was heading out of my teens, my computer took up half of a closet. The first incarnation of Kanga took up a half-height rack cabinet. Kanga the second fit into a standard mid-tower case plus four twelve-drive Sun 711 UltraSCSI enclosures.

Kanga and Roo Mark III reflect my newly heightened desire to work in cooler, quieter surroundings. Back in the "computer that half-filled the closet" days, I managed this very nicely by putting the equipment in the closet and running long RS-232 cables out to the various dumb terminals I was using. (Yes, I've pretty much always had two or three displays of *some* type on my desk.)

Kanga Mark III
Inside Kanga (version 3.0)

Now that I've converted the internal network entirely over to gigabit ethernet, thanks to a couple of Dell PowerConnect 2616 16-port unmanaged Gigabit switches, I'm moving in that direction again. I'm not going back to dumb terminals, but thanks to Gigabit ethernet and (if I get around to running the fiber to do it) fibre channel, I can put the main storage down in the basement where it can whirr away to its heart's delight.

So Kanga Mark III, my main general-purpose workstation, has just a pair of Seagate ST3160023AS 160 Gigabyte Serial ATA drives, chosen more for low noise levels than speed. I've even dropped down to a single CPU, in this case a 3.06GHz Intel Pentium 4 with 2 gig of RAM. I'm using a Koolance PC2-601B water-cooled case, but just for the CPU. A pair of low-speed, nearly silent fans provides enough airflow to cool the northbridge, the video card (Nvidia Quadro 900XGL), and the hard drives.

Roo Mark III, being dedicated to video editing, can't get away with being quite as minimalistic as Kanga. It's built on a Supermicro X5DAL-G dual Xeon mainboard with 2 gigabytes of dual-channel memory and a pair of 2.66GHz CPUs. The X5DAL-G is pretty minimalistic itself, with next to nothing built in...but you get dual Xeons, 8x AGP, *and* two independent 100/133 MHz PCI-X busses in addition to the usual PCI bus, all in a standard (not extended) ATX form factor.

With no CD/DVD-ROMs or floppy drives, all available drive bays are taken up by the single 160 Gig boot drive and eight 250 gigabyte hard drives driven off the RaidCore RC4852 64-bit/133MHz PCI-X controller. That's two terabytes of drive space in a reasonably quiet mid-tower-sized box and, according to Sandra, capable of sustained RAID 5 read speeds of around 350 megabytes per second--almost three times the full capacity of the standard PCI bus.
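A back-of-the-envelope check on that comparison (the 133 MB/s figure is the theoretical peak of a standard 32-bit/33MHz PCI bus; the 350 MB/s and drive counts come from the numbers above):

```python
# Standard 32-bit/33MHz PCI peak: 4 bytes per transfer * 33.33 MHz ~= 133 MB/s
pci_peak_mb_s = 4 * 33.33

sustained_read_mb_s = 350  # Sandra's reported sustained RAID 5 read speed
print(f"{sustained_read_mb_s / pci_peak_mb_s:.2f}x standard PCI")  # ~2.63x

# Raw capacity of the data array: eight 250 GB drives
print(f"{8 * 250 / 1000:.0f} TB raw")
```

A 64-bit/133MHz PCI-X slot, by contrast, peaks at just over a gigabyte per second, which is why the controller's bus isn't the bottleneck here.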

Roo Mark III
Roo (version 3.0)

Besides the RaidCore controller, there's an Nvidia Quadro FX 1100 (not the fastest thing out there, but I'm just doing video editing and graphics here; I've never actually played a computer game), a ViewCast Osprey 2000 DV Pro card for SDI input and encoding, a MOTU (Mark of the Unicorn) 828 Firewire audio interface, and an LSI Logic U160 SCSI host adapter for connecting the DLT drives.




transparent computer case

At the moment, the latest addition to the treehouse is "Jagular," an AMD Athlon 64 X2 3800+ based machine built on the ASUS A8N32-SLI mainboard. Currently I'm using a single XFX GeForce 7800GT for the video card, and I have no particular plans to actually install a second to make use of the board's dual x16 SLI capability. I actually decided to use this particular board because of its passive heatpipe cooling system and the lack of noise-generating fans that entails. I figure "quiet" is a good thing for this machine, since its main purpose in life is high-definition video playback in the theater.

It still uses some conventional cooling fans, but at least they're all of the large, slow, and quiet variety. For the processor, I used a Zalman CNPS-9500 copper cooler that's nearly as big as the power supply, and I've mounted a 120mm low-noise fan outside the case on an 80-to-120mm fan adapter. I thought about using one of the new fanless, passively-cooled power supplies, but, since I have to exhaust the heated air from the case interior *anyway*, not having a power supply fan would simply mean that I'd need to produce the same amount of airflow some other way. I might as well spend a bunch less on a supply with a large, quiet fan of its own.


I ended up picking the CoolerMaster RealPower 550 because it's 1) quiet and 2) rated the most efficient across the benchmarks I found on the net. It's not without its drawbacks, though: it's a bit pricey and it's not especially attractive. There are no cable sleeves or anything else that would help make the interior of your case tidy and clean-looking. On the one hand, it does come with a myriad of power connectors suitable for all the common mainboard types so whether you're using a modern Intel-based board, a current model AMD-based board, or a workstation or server EPS-style board, you're set. On the other hand, it comes with a myriad of power connectors suitable for all the common mainboard types, so you have to stick all those extra cables and connectors *somewhere*.


The rest of the systems on the interior subnet have similar names: "Eeyore" is the router, which sits quietly on the edge of the network (as opposed to "Bifrost," the much more serious and dignified router that bridges the gap to the internet); Tigger, a mobile media machine with multiple Pioneer A06 DVD burners used mostly for DVD replication; Piglet, a small-form-factor media server; Owl, a fileserver; and, in the bathroom, there's Pooh, a laptop of relatively little CPU, but still sufficient for typical bath use.
