init
66
ljcom/htdocs/misc/clusterlj/index.html
Normal file
@@ -0,0 +1,66 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Clustering LiveJournal</title>
</head>

<body>
<h1 align='center'>Clustering LiveJournal</h1>
<p align='center'>Brad Fitzpatrick & lj_dev crew</p>

<h2>Introduction</h2>
<p>
LiveJournal was originally designed to be used by 10 people, myself
and a few friends. Over time the design has been tweaked to let it
scale further, but the basic design is still the same. We're fast
approaching the time when there's nothing left to optimize, other than
the architecture itself. That's what this page is about.
</p>

<h2>Current Architecture</h2>
<p>
Currently, there is one master database, 5 slave databases, and a ton
of web servers. A request comes in to the load balancer, where it is
then given to the "best" web server. Each web server runs tons of
processes which all maintain a database connection to the master and
to a slave. All update operations go to the master. Read operations
go to a slave, or fall through to the master if the slave is behind.
</p>
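<p>For illustration, the read/write split in a web process looks roughly like this (a minimal sketch, not our actual code; the DSNs, passwords, and lag check are placeholders):</p>
<pre>
use strict;
use DBI;

# Handles opened once per web process; DSNs and passwords are placeholders.
my $dbh_master = DBI->connect("DBI:mysql:livejournal;host=db-master", "lj", "secret");
my $dbh_slave  = DBI->connect("DBI:mysql:livejournal;host=db-slave1", "lj", "secret");

# Stand-in for the real replication-lag check.
sub slave_is_behind { return 0; }

sub get_db_writer { return $dbh_master; }    # all updates hit the master
sub get_db_reader {
    # reads go to a slave, or fall through to the master if it's behind
    return slave_is_behind() ? $dbh_master : $dbh_slave;
}

my $userid = 23;   # example
my $rows = get_db_reader()->selectall_arrayref(
    "SELECT itemid, subject FROM log WHERE ownerid=?", undef, $userid);
</pre>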
<p>
The problem with this setup is that we're evenly dividing our reads
between slaves, but each slave db still has to perform every write.
Imagine that each database server has <i>t</i> units of time, that a
write takes 2 units, and that a read takes 1 unit. Say we're doing
<i>n</i> writes and <i>n</i> reads, spread over <i>s</i> slaves. As
<i>n</i> increases, each slave requires <i>n*2 + (n/s)</i> units of time.
Even if we keep increasing <i>s</i>, the number of slave databases, the
real problem is that the <i>n*2</i> term keeps growing, eating up time
those database servers could be spending on read requests.
</p>
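<p>To make the arithmetic concrete (numbers invented for illustration):</p>
<pre>
# Worked example: writes cost 2 units, reads cost 1, n of each, s slaves.
# Per-slave time = 2*n + n/s; the write term never shrinks.
my $n = 1000;
for my $s (1, 2, 5, 10, 100) {
    printf "slaves=%3d  per-slave load = %d units\n", $s, 2*$n + $n/$s;
}
# slaves=  1  per-slave load = 3000 units
# slaves=  5  per-slave load = 2200 units
# slaves=100  per-slave load = 2010 units   -- adding slaves barely helps
</pre>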
<p>
Worse, each slave won't have the disk capacity to hold the entire database. Even if it did, though, the bigger problem is that the machines' memory is finite, so if the db size on disk is growing and the memory size is fixed (all our slaves have a 2GB or 4GB limit... only our master can go up to 16GB), then as the on-disk size grows, the cache hit rate drops incredibly fast. Once you're not hitting the cache, things start to suck with a quickness. The speed of a disk seek compared to a fetch from memory is astronomical. Disks suck.
</p>
<h3>Tricks</h3>
<p>
Right now, we do some tricks to get by with the above
architecture. For the largest tables, we only replicate (from master to
slave) a subset of the data. That's why we have the
<tt>recent_logtext</tt> and <tt>recent_talktext</tt> tables. A cron
job deletes everything older than 2 weeks from these tables every day.
The web servers try the recent tables on the slave dbs first, then
fall back to using the master tables.
</p>
<p>
The next thing we did was have one database that replicated nothing but the recent tables; then all the web servers had 3 db connections open... text slave, general slave, and master. This improved the cache hits everywhere, since the dbs were now specialized. The general slaves even improved, notably since they no longer had all that text getting in the way of the selects from the <tt>log</tt> table.
</p>

<h2>The Plan</h2>
<p>
The plan has undergone modification over time as we refine it.
</p>
<ul>
<li><a href="rev01.html">Revision 1</a></li>
<li><a href="rev02.html">Revision 2</a> (assumes you've read/skimmed rev 1)</li>
</ul>

<hr>
<address><a href="mailto:bradfitz@livejournal.com">Brad Fitzpatrick</a></address>
<!-- Created: Mon Dec 10 15:41:42 PST 2001 -->
<!-- hhmts start -->
Last modified: Wed Dec 12 09:32:13 PST 2001
<!-- hhmts end -->
</body>
</html>
BIN
ljcom/htdocs/misc/clusterlj/ljarch.png
Normal file
After Width: | Height: | Size: 4.1 KiB |
201
ljcom/htdocs/misc/clusterlj/rev01.html
Normal file
@@ -0,0 +1,201 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Clustering LiveJournal</title>
</head>

<body>
<h1 align='center'>Clustering LiveJournal</h1>
<p align='center'>Brad Fitzpatrick & lj_dev crew</p>

<h2>Introduction</h2>
<p>
The problem is <a href="./">described here</a>. The following solution is a rough draft. Future refinements of this solution are posted below the aforelinked introduction.
</p>

<h2>The Plan; Revision 1</h2>
<p>
The ultimate ideal would be to have LiveJournal scale linearly with the number of servers we buy. And that's exactly what this aims to do.
</p>
<p>
The new plan is to have a bunch of independent clusters, each with a pool of ~5 web servers and at least 2 database servers. All of a user's data would be confined to one cluster. A new 'clusterid' column in the user table would specify which cluster that user is on. As new users join, we simply build a new cluster and start putting users there, keeping the old clusters running as they were. If people stop using the service, older clusters will free up, so we can put new users there.
</p>
<p>Before I go further, a picture I whipped up in Visio:</p>
<p align='center'><img src='ljarch.png' width=500 height=300 alt='mad art skills'></p>
<p>
What's it all mean?
</p>
<ul>
<li><p><b>Cloud</b> -- this is the Internet. It's always drawn as a cloud.</p>

<li><p><b>Load Balancer</b> -- this is our pair of BIG/ips... it redirects requests places and monitors servers, etc., etc.</p>

<li><p><b>BH: redir</b> -- the logic to redirect a request to a certain cluster is too much work for the BIG/ip. We'll need to do database lookups and stuff from the given user field, which'll be in any one of 10 places. Writing candidacy functions for mod_backhand is pretty easy, and they can be written in C or Perl. Actually, these machines can be on any/all of the cluster web slaves... there's no need to have them be physically separate, but it makes for a prettier abstraction in the above picture.</p>

<li><p><b>Controller Master</b> -- this is the master database, but now it'll hold a whole ton less than it did before. No log info, no journal text, no comment info, no comment text, no journal styles, no memories, etc., etc. The two important things it'll have will be the user table (for maintaining unique usernames and userids) and the userusage table, so all clusters will be able to know when a friend on a different cluster updated their journal. All of this data is replicated down, but it's incredibly light.</p>

<li><p><b>Cluster Master</b> -- all data about a user that isn't stored on the controller master (site global) will be put on that user's cluster master, where it'll replicate down to the cluster's slave(s).</p>

<li><p><b>Cluster Slaves</b> -- will only ever need to:<ul><li>write to controller master<li>write to cluster master<li>read from cluster slave (or fall back to cluster master)</ul></p>
</ul>

<p>
Time for another picture:
</p>
<p align='center'><a href="usagepie-large.png"><img src="usagepie-large.png" width=300 height=200 alt='disk usage graph'></a></p>

<p>
See that 1% slice for userinterests? That's the largest thing that'll be stored on the controller master. All the other larger stuff (and a bunch of the &lt;1% slices, even) will only be stored on the cluster masters.
</p>

<h2>Notes</h2>

<p><b>Maintaining globally unique AUTO values.</b>
Without everything in one logical database, the <tt>AUTO_INCREMENT</tt>
columns would have no coordination, and multiple clusters could use the same
unique IDs for posts, comments, pictures, memories, etc. To prevent
this, and to make it easier to delete & move users (a move is copy + delete), we need to change all these unique primary key auto columns into dual-column primary keys: (userid, autovalue). You just insert the given userid and a NULL for the autovalue, and each userid gets its own count. That means that people will no longer have itemids with 9 digits and such... everybody's numbers will be unique and small, relative just to them.
</p>
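<p>For example, MyISAM already supports this: an <tt>AUTO_INCREMENT</tt> column that's the second part of a composite key counts up independently per prefix. A sketch (table and column names are illustrative, not final schema; <tt>$dbh</tt> is assumed to be a connected DBI handle):</p>
<pre>
# Sketch: per-user counters via MyISAM's composite-key AUTO_INCREMENT.
$dbh->do(qq{
    CREATE TABLE log2 (
        journalid  INT UNSIGNED NOT NULL,
        jitemid    INT UNSIGNED NOT NULL AUTO_INCREMENT,
        eventtime  DATETIME NOT NULL,
        PRIMARY KEY (journalid, jitemid)
    ) TYPE=MyISAM
});

# Insert NULL for the auto column; MySQL assigns MAX(jitemid)+1
# *within that journalid*, so every user's numbers start at 1.
$dbh->do("INSERT INTO log2 VALUES (23, NULL, NOW())");  # journal 23 gets jitemid 1
$dbh->do("INSERT INTO log2 VALUES (23, NULL, NOW())");  # journal 23 gets jitemid 2
$dbh->do("INSERT INTO log2 VALUES (57, NULL, NOW())");  # journal 57 gets jitemid 1
</pre>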
<p><b>Legacy ID snapshot</b>. So as to not break links, we'll need a table to map old unique IDs to the new (userid, unique) tuples. When one of these legacy requests is received, the backhander won't know where to throw it, so it'll throw it to any cluster, which'll then look up the owning userid and send an HTTP redirect. The next request (to, say, /talkread.bml?user=test&itemid=32) names the user, so the backhander will know which cluster to assign it to.
</p>
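<p>A rough sketch of that lookup-and-redirect step (the <tt>legacy_map</tt> table is hypothetical, and this is mod_perl-ish pseudocode rather than final code):</p>
<pre>
# $dbh: handle to the controller master; $r: the Apache request object.
my $old_itemid = 32;   # itemid parsed from the legacy URL

# legacy_map: hypothetical table mapping old global itemids
# to the new (user, jitemid) tuples
my ($user, $jitemid) = $dbh->selectrow_array(
    "SELECT user, jitemid FROM legacy_map WHERE old_itemid=?",
    undef, $old_itemid);

# Redirect to a URL that names the user, so the backhander can
# route the follow-up request to the right cluster.
$r->header_out(Location => "/talkread.bml?user=$user&itemid=$jitemid");
$r->status(302);   # ...then return from the handler
</pre>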
<p><b>Cache benefit.</b> The coolest thing about this is that the
caches on every webserver process and every database server will be a
lot more valid. If MySQL needs to pull in 10 pages from disk to find
4 records, we don't have to worry that all the other data on those
pages is worthless. Now we know that those other records we now have
sitting in memory are also valid for somebody on our cluster. And
because we cap the number of users per cluster, we can ensure that the
cache performance of a cluster stays the same over time.
</p>

<p><b>RPC between clusters.</b> Obviously there's going to need to be
communication between clusters. We started to play with this already,
actually... just wrap DB calls over HTTP. We'll need this for
things like get_recent_items for friends views and for getting the
text of everything.
</p>
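<p>A minimal sketch of the "DB call over HTTP" idea, assuming a hypothetical <tt>/rpc.bml</tt> endpoint on each cluster that runs a whitelisted query and returns tab-separated rows (the host naming is invented):</p>
<pre>
use LWP::UserAgent;

sub cluster_rpc {
    my ($clusterid, $func, @args) = @_;
    my $ua  = LWP::UserAgent->new(timeout => 5);
    my $res = $ua->post("http://cluster$clusterid.int.livejournal.com/rpc.bml",
                        { func => $func, args => join(",", @args) });
    return () unless $res->is_success;
    # one row per line, columns separated by tabs
    return map { [ split /\t/ ] } split /\n/, $res->content;
}

# e.g., fetch recent items for a friends view from cluster 3
my @rows = cluster_rpc(3, "get_recent_items", $userid, 20);
</pre>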
<p>
It'll be important to request everything that'll be needed from each
cluster in one transaction, so as to avoid the round-trip latencies
of serializing a bunch of requests. While the number of clusters is
low (~3 - 6) we'll be able to get away with enumerating over the
clusters we need to do RPC with and doing the requests one by one.
In the future it might be nice to parallelize these requests.
</p>
<p>Actually, I'm probably worrying about this too much. We already do
dozens of DB calls serialized in places. This'll be faster,
as each DB call (albeit over HTTP) will execute faster due to better
cache hits.</p>
<p>
Another concern you may be having: <i>"But Brad, won't an increased
number of users across all the clusters cause a bunch more RPC
requests to each cluster, thus diminishing the quality of a cluster,
which this whole plan was supposed to address?"</i> Ah, clever you
are. Yes, reads will increase to all clusters as total users and
total clusters grow. But remember, reads are easy to load balance...
all we have to do is buy more cluster db slaves and spread the
reads evenly. So initially we can start with only 1 or 2 cluster
slaves per cluster. Once our total number of clusters hits 5-6,
it'd be wise to add another db slave to each cluster.
</p>

<p><b>Friends view logic.</b> The <tt>userusage</tt> table is replicated everywhere, so each cluster will know when users' friends updated, and which clusters
to do RPC on.
</p>

<p><b>Backhander.</b>
The backhander will have to look in several places in the HTTP request to determine which cluster to throw it to (see the sketch below):</p>
<ul>
<li>REQUEST_URI =~ m!^/(users|community|~)/<b>(\w+)</b>!
<li>REQUEST_URI =~ m![\?\&]user=<b>(\w+)</b>!
<li>Post data: user
</ul>
<p>
For anything that doesn't match those, we'll either need to make sure it does, or add other rules.
</p>
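<pre>
# Sketch of the routing logic only -- not the real mod_backhand API,
# and the helpers are made up. $r is the Apache request object.
sub cluster_for_request {
    my ($r) = @_;
    my $user;

    if    ($r->uri  =~ m!^/(?:users|community|~)/(\w+)!) { $user = $1; }
    elsif ($r->args =~ m!(?:^|&)user=(\w+)!)             { $user = $1; }
    # (POST bodies would need to be sniffed for a 'user' field, too)

    return undef unless $user;   # no user found; any web slave will do

    # $dbh: handle to a (replicated) copy of the user table
    my ($clusterid) = $dbh->selectrow_array(
        "SELECT clusterid FROM user WHERE user=?", undef, $user);
    return $clusterid;   # candidacy: keep only this cluster's servers
}
</pre>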
<p><b>Stats aggregation.</b>
Each cluster will have to run its own statistics, and then something on mayor will have to aggregate those.
</p>

<p><b>recent_* tables</b>.
The recent tables could, and probably should, die.
</p>

<p><b>Moving users between clusters</b>.
It'll be easy to move users between clusters. Now that all data is prefixed with the userid, finding it and deleting it is easy. The only issue is locking, which isn't too hard. We can use GET_LOCK/RELEASE_LOCK on the controller master as a global mutex mechanism.
</p>
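<p>For instance (a sketch; the lock-name convention and copy/delete helpers are invented), MySQL's advisory locks give us that mutex:</p>
<pre>
# $dbc is a DBI handle to the controller master.
my $lockname = "moveuser-$userid";

# GET_LOCK blocks up to 30 seconds; returns 1 on success, 0 on timeout.
my ($got) = $dbc->selectrow_array("SELECT GET_LOCK(?, 30)", undef, $lockname);
die "user $userid is already being moved\n" unless $got;

copy_user_data($userid, $from_cluster, $to_cluster);
delete_user_data($userid, $from_cluster);

$dbc->do("SELECT RELEASE_LOCK(?)", undef, $lockname);
</pre>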
<p>
When might it be necessary to move users? Imagine we're getting heavy growth and we must temporarily overload the newest cluster while we wait for parts from some slow vendor. Once the new cluster is up, we'll want to move users over. Or, consider that users stop using the site over time. It'd be nice to be able to move users from the busiest cluster over to the ones that are dropping in traffic due to users quitting. Etc., etc.
</p>

<p><b>What's where?</b>
The major things, at least:
</p>
<table align='center' border=1 width=400>
<tr>
<th>Cluster</th>
<th>Everywhere</th>
</tr>
<tr>
<td>
log, logprop, talk, logtext, talktext, userpicblob (userpic images)
</td>
<td>
user, userusage, generic files, site images
</td>
</tr>
</table>
<p>
This isn't meant as a definitive list. If you want to start hacking on some part (see the implementation plan below), then check with the code@ mailing list to see where the table/service in question will live.
</p>

<p><b>Directory implications.</b>
<br>(22:39:01) <i>Foobar</i>: So here's a question that may or may not be relevant: how would the directory fit into this?
<br>(22:39:30) <i>Foobar</i>: just query each cluster and aggregate and sort them after they all return?
<br>(22:39:35) <i>brad</i>: basically, yeah
<br>(22:39:53) <i>Foobar</i>: sounds potentially ugly, but I guess it's workable
</p>

<p><b>What cluster do new users go to?</b>
The least loaded/newest cluster.
</p>

<p><b>Are there still free/paid servers?</b>
We <i>could</i> have a paid cluster, but it seems quite silly, since that paid cluster would have to RPC over to the free clusters. Plus, having a paid cluster would involve moving tons of people each day, both back and forth. So the plan is to <b>NOT</b> have separate clusters, and just make everything incredibly fast. If bandwidth gets too expensive, we could force free users to have an obligatory 30ms latency, which would still be about 3870ms better than what it is now. Please don't complain about this... we have no obligation to give away infinite CPU and bandwidth to everybody for free. We realize that free users constitute the overwhelming majority, so free service will always be good (that's our goal), but we want to always be able to give paid users a little bit extra, so if we artificially limit the speed of free access while lowering our costs, so be it.
</p>
<p>
But won't this bring down paid account sales, if free servers are fast enough? Perhaps, but who cares... having shitty service is a poor way to profit. We'll make up for the lost sales due to fast servers by offering all the features we've been saying we're going to do forever. Hell, a bunch of them are already 80% written, but we've been too busy keeping the site alive. Once the site is permanently alive we can focus on spending time writing fun stuff instead.
</p>

<p><b>So now the BIG/ips don't do much, huh?</b> Yeah, not quite as much. Right now we have a huge ruleset that gets run on the BIG/ip for each request. That'd be simplified quite a bit, and the mod_backhand code will do the work now.
</p>


<h2>Implementation Plan</h2>
<p>
This is a lot of tedious work, but it's all very trivial. Luckily, though, it's highly parallelizable.
</p>
<ul>
<li><p><b>Unique ID Split.</b> The first thing that needs to happen is splitting all the unique IDs into (userid, unique) tuples. We can and should put this live, after testing, before we do the rest. The side benefit is that we'll then be able to delete users incredibly easily, so we'll be able to delete a lot of data before we later move everybody onto their clusters.</p>

<li><p><b>Backhander.</b> We need to write the backhander candidacy function. It might be easiest to hire a backhand guru to do it. I know two people that'd probably be down. Otherwise it shouldn't be too hard.</p>

<li><p><b>clusterid column.</b> We need to add the clusterid column to the user table, set to 0 for everybody initially. 0 will mean "the big monolithic cluster", which is how most LJ sites will run. I haven't decided yet if we'll need to special-case 0 to mean no cluster (on the old system) or if it'll just be another cluster, much larger than the others at first.</p>

<li><p><b>RPC code.</b> Any code that depends on accessing data from a table for a userid that doesn't exist on that cluster will need to be rewritten to do RPC to the appropriate cluster. The main place is friends views. There are a ton of smaller areas, but to begin with we'll replicate a bunch of the &lt;1% slice tables, even though they could later be cluster-only, just to make our lives easier at first.</p>

<li><p><b>Ton of testing.</b> We'll need to run test transitions over and over until we're sure it's perfect. I'll be setting up a few machines to simulate different clusters (each with web & db server).</p>
</ul>

<h1>Conclusion</h1>
<p>
It's time to get serious. I'm sick of dumb hacks. All those dumb hacks were nice, and a large number of them will still be applicable and carry over and benefit us in the new code, but the root problem (dbs sucking over time) needs to be solved.
</p>
<p>
Please help me out with this. I can't wait until we can just buy 4-6 more machines and put a new cluster online, letting us grow without diminishing the quality of service for the other clusters. I can't wait until I can spend my time programming fun new features instead of just keeping the site alive.
</p>

<hr>
<address><a href="mailto:bradfitz@livejournal.com">Brad Fitzpatrick</a></address>
<!-- Created: Mon Dec 10 15:41:42 PST 2001 -->
<!-- hhmts start -->
Last modified: Mon Jan 21 19:34:50 PST 2002
<!-- hhmts end -->
</body>
</html>
90
ljcom/htdocs/misc/clusterlj/rev02.html
Normal file
@@ -0,0 +1,90 @@
<html>

<head><title>Clustering LiveJournal Take 2</title></head>

<body>

<h2>Differences from Revision 1</h2>
<p>The following are the major differences from <a href="rev01.html">revision 1</a> of our clustering plan.
</p>

<h3>No clustering of web slaves; no backhand redirection</h3>
<p>
This is the main difference. We're only clustering databases.
This means we don't need the backhand redirector machines to look at URIs
and redirect requests to the right pool of webslaves. And this also means
we can still have premium faster paid servers.
</p>

<h3>No RPC between clusters</h3>
<p>
Each webslave will talk directly to the DB it needs to using DBI,
rather than doing some HTTP wrapper kludge. The point of the RPC
wrapper before was to prevent the cluster master DBs from having five
billion connections from a half billion web slaves. But really, MySQL
handles insane numbers of connections anyway (if they're mostly idle,
as they will be). If we need to serialize requests later between a
smaller number of db connections, we'll just do that, making each
machine have a pool of connections they have to share.
</p>
<p>
Why all idle, you say? Well, the number of web slaves continues to grow
over time, but we limit the number of users/traffic/load per db cluster. So
divide. Each web slave will eventually get a master connection, and
the master traffic is fixed, so over time, a smaller fraction of those
connections will be active. When it gets too extreme, we either
cluster web slaves or build the DB connection pool. But we can deal
with this later. Both solutions are easy enough, but they're boring
to care about now.
</p>

<h3>Cluster Tables</h3>

<p>
Tables that can be found on each cluster are as follows:
</p>
<code>
<ul>
<li>talk2</li>
<li>talktext2</li>
<li>talkprop2</li>
<li>log2</li>
<li>logsec2</li>
<li>logtext2</li>
<li>logsubject2</li>
<li>logprop2</li>
<li>syncupdates2</li>
<li>userbio</li>
<li>talkleft</li>
</ul>
</code>

<p>These tables replace the similarly-named tables on the original master
server (i.e. the ones without the 2 appended); <code>userbio</code> keeps the
same name.</p>

<p>Currently, a user can conceivably be on either the original master database
or in one of the myriad clusters available. To detect which is the case,
examine the <code>clusterid</code> element of the user's entry in the <code>user</code>
table. If <code>clusterid == 0</code> then the user is located on the old
master database and their data needs to be loaded from the old tables; otherwise,
the data is located on cluster #<code>clusterid</code> using the new table names
above.</p>

<p>For future expansion, the element <code>dversion</code> is also added to
the <code>user</code> table. If <code>dversion == 0</code> then the user is not
on a cluster, i.e. they're on the original master system; any
<code>dversion >= 1</code> means the user is located on a cluster. The plan is
for higher dversion numbers to indicate that more per-user data has been moved
from the original setup to the clustering system.</p>
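<p>In code, the dispatch might look something like this (a sketch only; <code>get_cluster_reader</code> is a hypothetical handle-picking helper, and <code>$dbh</code> is assumed to be the handle to the original master):</p>
<pre>
sub log_source_for {
    my ($u) = @_;   # the user's row from the user table
    if ($u->{dversion} >= 1 && $u->{clusterid} > 0) {
        # clustered user: new-style table on their cluster
        return (get_cluster_reader($u->{clusterid}), "log2");
    }
    # unconverted user: old table on the original master
    return ($dbh, "log");
}

my ($db, $table) = log_source_for($u);
# (real code also copes with old/new column naming differences)
</pre>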
<p>Conversion from dversion 0 to dversion 1 will be a lazy conversion. This
involves the READ_ONLY capability code. Basically, the user's READ_ONLY
capability bit will be set, and then the code will pause for a minute or
two to allow any pending transactions to go through. After this, all
data will be copied from the old database system into the appropriate
cluster. After everything is copied, the data is deleted from the old
system and the user's READ_ONLY capability bit is toggled off.</p>
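<p>The flow, as a sketch (the helper names are hypothetical, and real code has to handle failure at every step):</p>
<pre>
sub convert_user_to_cluster {
    my ($u, $clusterid) = @_;

    set_readonly_bit($u, 1);   # stop new writes for this user
    sleep 120;                 # let in-flight transactions drain

    foreach my $table (qw(log2 logtext2 talk2 talktext2 userbio)) {
        copy_user_rows($u, $table, $clusterid);   # old system -> cluster
    }

    delete_user_rows_from_old_system($u);
    update_user($u, clusterid => $clusterid, dversion => 1);
    set_readonly_bit($u, 0);   # writes flow again, now to the cluster
}
</pre>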
</body>

</html>
BIN
ljcom/htdocs/misc/clusterlj/usagepie-large.png
Normal file
After Width: | Height: | Size: 11 KiB |
20
ljcom/htdocs/misc/goals.html
Normal file
@@ -0,0 +1,20 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>LiveJournal.com Goals</title>
</head>

<body>
<h1>LiveJournal.com Goals</h1>

<p>
Every so often we make a graph visualizing where we are and what we need to do.
</p>

<ul>
<li><a href="goals/goals-20020605.html">2002-06-05</a></li>
<li><a href="goals/goals-20020209.html">2002-02-09</a></li>
</ul>

</body>
</html>
BIN
ljcom/htdocs/misc/goals/goals-20020209.gif
Normal file
After Width: | Height: | Size: 147 KiB |
1
ljcom/htdocs/misc/goals/goals-20020209.html
Normal file
@@ -0,0 +1 @@
<a href="/misc/goals.html">Goals</a> as of <i>2002-02-09</i><p><img src='goals-20020209.gif'>
BIN
ljcom/htdocs/misc/goals/goals-20020605.gif
Normal file
After Width: | Height: | Size: 128 KiB |
1
ljcom/htdocs/misc/goals/goals-20020605.html
Normal file
@@ -0,0 +1 @@
<a href="/misc/goals.html">Goals</a> as of <i>2002-06-05</i><p><img src='goals-20020605.gif'>
1
ljcom/htdocs/misc/index.html
Normal file
@@ -0,0 +1 @@
<!-- Empty -->
BIN
ljcom/htdocs/misc/ljlogo/lj.ico
Normal file
After Width: | Height: | Size: 5.1 KiB |
108
ljcom/htdocs/misc/ljlogo/lj_logo.svg
Normal file
@@ -0,0 +1,108 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="1.0"
x="0.0000000"
y="0.0000000"
width="535.62500"
height="625.00000"
id="svg837">
<defs
id="defs839" />
<g
transform="matrix(1.587978,0.000000,0.000000,1.587978,-88.94782,31.10516)"
style="fill:#336699;"
id="g889">
<path
d="M 350.26100,40.923500 C 311.73200,-4.3894700 223.95800,12.107100 154.21100,77.771100 C 116.17600,113.58100 91.360400,156.61600 82.903800,195.69700 C 83.529300,195.65400 84.152400,195.61600 84.777800,195.57800 C 92.993200,158.87200 116.43200,118.77200 152.24500,85.504500 C 218.68400,23.790700 302.52300,8.9665500 339.50700,52.395200 C 365.01000,82.353200 361.55700,132.20000 335.16200,180.49800 C 335.10400,180.52400 335.04200,180.55000 334.98700,180.57400 C 319.57600,209.31000 299.10600,229.47800 278.28900,247.60100 C 226.74700,292.47000 160.55100,295.99800 130.83200,261.02300 C 117.07400,244.83200 112.82500,222.33400 116.87900,197.87500 C 122.30400,165.15600 142.59700,128.92100 174.94000,99.550000 C 182.13200,93.019700 189.57600,87.160300 197.15100,81.954700 C 243.05500,51.879200 294.46300,48.328900 317.76000,75.731600 C 343.73900,106.29100 324.93200,163.76000 275.75900,204.17200 C 275.75200,204.17800 272.57500,206.68500 271.73200,207.45700 L 264.98200,212.89400 L 266.59500,215.43900 L 278.65600,205.98800 L 278.64100,205.97600 C 279.52000,205.27100 280.39800,204.57000 281.26700,203.83900 C 333.11300,160.31200 353.06500,99.097800 325.83500,67.116500 C 320.42800,60.765300 313.54600,56.072900 305.61400,52.926500 C 305.65100,52.892800 305.68400,52.856100 305.72100,52.820600 C 305.16000,52.635400 304.57900,52.479200 304.00900,52.307900 C 279.04900,43.233500 244.12800,49.175000 210.59000,68.466000 C 195.91600,76.573400 181.40600,87.109000 167.78800,99.985500 C 137.11200,128.99300 117.06600,164.01800 109.84400,196.50600 C 103.31700,225.86900 107.26400,253.15900 123.33400,272.06100 C 155.26900,309.61600 222.70400,301.74600 279.37500,254.23200 C 279.41800,254.40400 289.37500,245.97200 293.73500,241.86400 C 363.48200,176.20000 388.78900,86.235500 350.26100,40.923500 z "
style="stroke-width:0.25000000;"
id="path890" />
</g>
<g
transform="matrix(1.587978,0.000000,0.000000,1.587978,-88.94779,31.10516)"
id="g891">
<g
id="g892">
<path
d="M 123.53100,83.185200 L 108.73600,63.385400 L 130.89400,32.310700 L 164.65100,17.977200 L 181.18600,40.107100 L 146.60700,52.936200 L 123.53100,83.185200 z "
style="fill:#cccccc;"
id="path893" />
<path
d="M 171.09600,44.603200 L 153.13400,53.862100 L 135.74500,30.518700 L 153.70600,21.259900 L 171.09600,44.603200 z "
style="fill:#ffffff;"
id="path894" />
<path
d="M 125.94800,86.824500 L 146.35700,57.056300 L 200.16800,129.07500 L 199.10700,153.16000 L 189.81600,171.89900 L 125.94800,86.824500 z "
style="fill:#6699cc;"
id="path895" />
<path
d="M 181.23000,42.191700 L 242.58400,124.30500 L 220.71100,133.08600 L 200.16800,129.07500 L 146.35700,57.056300 L 181.23000,42.191700 z "
style="fill:#99ccff;"
id="path896" />
<path
d="M 106.56400,59.262000 L 129.82900,31.291300 L 162.71300,17.004500 L 152.70600,3.6100500 L 118.85500,16.197900 L 97.910200,47.274700"
style="fill:#ff9999;"
id="path897" />
<path
d="M 257.75800,202.50600 L 243.55500,123.57900 L 220.71100,133.08600 L 200.16800,129.07500 L 199.10700,153.16000 L 189.81600,171.89900"
style="fill:#ffcc99;"
id="path898" />
<path
d="M 266.16900,214.37200 L 254.66400,124.05000 L 167.31200,7.3693200 C 158.54000,-4.3700000 135.26400,-1.8088400 115.32300,13.091600 C 95.381800,27.990900 86.327600,49.586100 95.098600,61.325800 L 182.56100,178.38100 L 241.60300,203.79700 C 242.18900,204.12200 242.80900,204.39000 243.46100,204.59700 L 266.16900,214.37200 z M 249.98200,181.02300 C 247.70200,181.48100 245.39100,182.48000 243.28800,184.05100 C 241.18700,185.62100 239.57500,187.55100 238.49000,189.60300 L 194.08100,169.48000 C 197.40000,164.30200 206.87500,148.94100 203.86800,133.41900 C 218.63400,139.25000 235.13600,133.19500 241.55300,128.40300 L 242.37800,129.51000 L 249.98200,181.02300 z M 112.79600,63.553400 C 115.80200,54.522600 123.43700,44.438700 134.40300,36.244300 C 144.10200,28.996700 154.48700,24.656400 163.34000,23.524700 L 174.42100,38.383100 C 165.38100,39.335600 154.62300,43.733200 144.60500,51.218400 C 134.01700,59.129200 126.50500,68.809200 123.29000,77.596900 L 112.79600,63.553400 z M 238.33400,124.09000 C 229.22300,131.99200 205.29100,130.79500 200.35700,125.38000 L 151.14000,59.508500 C 151.30200,59.384900 151.45300,59.255000 151.61800,59.132000 C 161.26900,51.920700 171.60000,47.590000 180.42300,46.431800 L 238.33400,124.09000 z M 147.32700,62.605200 L 196.42600,128.31700 C 196.42600,128.31700 203.91900,147.50300 189.48000,166.18200 L 129.97600,86.546000 C 132.55000,78.722800 138.62400,70.112100 147.32700,62.605200 z M 122.28100,22.404400 C 131.49200,15.523100 148.93900,10.151100 156.79400,15.547100 C 147.83900,16.588600 137.25700,20.958300 127.39100,28.330700 C 116.93100,36.145800 109.47900,45.685700 106.19900,54.387500 C 101.70500,44.936200 112.46400,29.739900 122.28100,22.404400 z "
style="fill:#003366;"
id="path899" />
</g>
</g>
<path
d="M 1.9226074e-06,552.13143 L 1.9226074e-06,606.41943 L 21.960002,606.41943 L 21.960002,596.33943 L 10.728002,596.33943 L 10.728002,552.13143 L 1.9226074e-06,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path965" />
<path
d="M 48.168002,565.73943 C 51.696002,565.73943 54.648002,562.78743 54.648002,559.33143 C 54.648002,555.87543 51.912002,552.77943 48.384002,552.77943 C 44.712002,552.77943 41.760002,555.51543 41.760002,559.18743 C 41.760002,562.78743 44.640002,565.73943 48.168002,565.73943 z "
style="font-size:72.000000;font-weight:bold;fill:#6699cc;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path964" />
<path
d="M 43.128002,570.05943 L 43.128002,606.41943 L 53.280002,606.41943 L 53.280002,570.05943 L 43.128002,570.05943 z "
style="font-size:72.000000;font-weight:bold;fill:#6699cc;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path963" />
<path
d="M 73.944002,552.13143 L 89.640002,606.41943 L 98.424002,606.41943 L 113.47200,552.13143 L 102.24000,552.13143 L 95.688002,578.05143 C 94.824002,581.21943 94.608002,584.31543 94.032002,587.48343 L 93.888002,587.48343 C 93.240002,584.38743 92.880002,581.29143 91.944002,578.19543 L 85.464002,552.13143 L 73.944002,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path962" />
<path
d="M 135.50400,552.13143 L 135.50400,606.41943 L 157.24800,606.41943 L 157.24800,596.33943 L 146.23200,596.33943 L 146.23200,584.09943 L 155.80800,584.09943 L 155.80800,574.01943 L 146.23200,574.01943 L 146.23200,562.21143 L 157.10400,562.21143 L 157.10400,552.13143 L 135.50400,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path961" />
<path
d="M 199.44000,552.13143 L 188.71200,552.13143 L 188.71200,588.85143 C 188.71200,592.45143 189.07200,597.92343 184.03200,597.92343 C 181.80000,597.92343 179.71200,596.62743 178.41600,594.89943 L 178.41600,605.62743 C 180.72000,606.85143 183.81600,607.28343 186.48000,607.28343 C 199.94400,607.28343 199.44000,593.31543 199.44000,588.85143 L 199.44000,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path960" />
<path
d="M 283.60800,552.13143 L 283.60800,592.45143 C 283.60800,602.60343 289.58400,607.28343 299.59200,607.28343 C 315.07200,607.28343 315.72000,597.49143 315.72000,591.73143 L 315.72000,552.13143 L 304.99200,552.13143 L 304.99200,589.57143 C 304.92000,593.60343 304.92000,597.56343 299.66400,597.56343 C 293.83200,597.56343 294.33600,591.37143 294.33600,587.26743 L 294.33600,552.13143 L 283.60800,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path957" />
<path
d="M 396.50400,552.13143 L 396.50400,606.41943 L 407.23200,606.41943 L 407.23200,581.50743 L 407.16000,579.27543 L 406.65600,573.80343 L 406.80000,573.65943 L 420.04800,606.41943 L 430.20000,606.41943 L 430.20000,552.13143 L 419.47200,552.13143 L 419.47200,576.53943 C 419.47200,579.34743 419.68800,582.15543 420.33600,584.74743 L 420.19200,584.89143 L 406.80000,552.13143 L 396.50400,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path954" />
<path
d="M 511.05600,552.13143 L 511.05600,606.41943 L 533.01600,606.41943 L 533.01600,596.33943 L 521.78400,596.33943 L 521.78400,552.13143 L 511.05600,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path951" />
<path
d="M 241.50000,551.28125 C 231.92399,551.28122 222.40625,558.90525 222.40625,579.28125 C 222.40625,599.65723 231.92400,607.28125 241.50000,607.28125 C 251.07599,607.28123 260.56250,599.65725 260.56250,579.28125 C 260.56251,558.90523 251.07600,551.28125 241.50000,551.28125 z M 241.50000,561.21875 C 248.62799,561.21875 249.12500,574.45725 249.12500,579.28125 C 249.12500,583.60123 248.62800,597.34375 241.50000,597.34375 C 234.37199,597.34377 233.84375,583.60125 233.84375,579.28125 C 233.84375,574.45723 234.37200,561.21875 241.50000,561.21875 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path983" />
<path
d="M 340.71875,552.12500 L 340.71875,606.40625 L 351.43750,606.40625 L 351.43750,581.37500 L 351.56250,581.37500 L 361.00000,606.40625 L 372.31250,606.40625 L 362.87500,581.71875 C 368.20300,578.76673 370.59375,573.66350 370.59375,567.68750 C 370.59374,552.85552 358.34000,552.12500 351.50000,552.12500 L 340.71875,552.12500 z M 351.43750,561.00000 L 352.87500,561.00000 C 358.27499,560.71200 360.15625,563.86200 360.15625,567.75000 C 360.15624,572.21400 358.20850,575.53125 353.31250,575.53125 L 351.43750,575.46875 L 351.43750,561.00000 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path999" />
<path
d="M 464.75000,552.12500 L 452.15625,606.40625 L 462.81250,606.40625 L 464.75000,597.28125 L 475.62500,597.28125 L 477.65625,606.40625 L 489.09375,606.40625 L 476.06250,552.12500 L 464.75000,552.12500 z M 469.93750,566.09375 L 470.09375,566.09375 L 471.31250,574.31250 L 474.06250,588.78125 L 466.50000,588.78125 L 468.78125,574.31250 L 469.93750,566.09375 z "
style="font-size:72.000000;font-weight:bold;fill:#003366;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path1000" />
</svg>
After Width: | Height: | Size: 11 KiB |
118
ljcom/htdocs/misc/ljlogo/lj_logo_bw.svg
Normal file
@@ -0,0 +1,118 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="1.0"
x="0.0000000"
y="0.0000000"
width="535.62500"
height="625.00000"
id="svg837">
<defs
id="defs839">
<linearGradient
id="linearGradient865">
<stop
style="stop-color:#000000;stop-opacity:1.0000000;"
offset="0.0000000"
id="stop866" />
<stop
style="stop-color:#ffffff;stop-opacity:1.0000000;"
offset="1.0000000"
id="stop867" />
</linearGradient>
<linearGradient
id="linearGradient868"
xlink:href="#linearGradient865" />
</defs>
<g
transform="matrix(1.587978,0.000000,0.000000,1.587978,-88.94782,31.10516)"
style="fill:#336699;"
id="g889">
<path
d="M 350.26100,40.923500 C 311.73200,-4.3894700 223.95800,12.107100 154.21100,77.771100 C 116.17600,113.58100 91.360400,156.61600 82.903800,195.69700 C 83.529300,195.65400 84.152400,195.61600 84.777800,195.57800 C 92.993200,158.87200 116.43200,118.77200 152.24500,85.504500 C 218.68400,23.790700 302.52300,8.9665500 339.50700,52.395200 C 365.01000,82.353200 361.55700,132.20000 335.16200,180.49800 C 335.10400,180.52400 335.04200,180.55000 334.98700,180.57400 C 319.57600,209.31000 299.10600,229.47800 278.28900,247.60100 C 226.74700,292.47000 160.55100,295.99800 130.83200,261.02300 C 117.07400,244.83200 112.82500,222.33400 116.87900,197.87500 C 122.30400,165.15600 142.59700,128.92100 174.94000,99.550000 C 182.13200,93.019700 189.57600,87.160300 197.15100,81.954700 C 243.05500,51.879200 294.46300,48.328900 317.76000,75.731600 C 343.73900,106.29100 324.93200,163.76000 275.75900,204.17200 C 275.75200,204.17800 272.57500,206.68500 271.73200,207.45700 L 264.98200,212.89400 L 266.59500,215.43900 L 278.65600,205.98800 L 278.64100,205.97600 C 279.52000,205.27100 280.39800,204.57000 281.26700,203.83900 C 333.11300,160.31200 353.06500,99.097800 325.83500,67.116500 C 320.42800,60.765300 313.54600,56.072900 305.61400,52.926500 C 305.65100,52.892800 305.68400,52.856100 305.72100,52.820600 C 305.16000,52.635400 304.57900,52.479200 304.00900,52.307900 C 279.04900,43.233500 244.12800,49.175000 210.59000,68.466000 C 195.91600,76.573400 181.40600,87.109000 167.78800,99.985500 C 137.11200,128.99300 117.06600,164.01800 109.84400,196.50600 C 103.31700,225.86900 107.26400,253.15900 123.33400,272.06100 C 155.26900,309.61600 222.70400,301.74600 279.37500,254.23200 C 279.41800,254.40400 289.37500,245.97200 293.73500,241.86400 C 363.48200,176.20000 388.78900,86.235500 350.26100,40.923500 z "
style="fill:#000000;stroke-width:0.25000000;"
id="path890" />
</g>
<g
transform="matrix(1.587978,0.000000,0.000000,1.587978,-88.94779,31.10516)"
id="g891">
<g
id="g892">
<path
d="M 123.53100,83.185200 L 108.73600,63.385400 L 130.89400,32.310700 L 164.65100,17.977200 L 181.18600,40.107100 L 146.60700,52.936200 L 123.53100,83.185200 z "
style="fill:#ffffff;"
id="path893" />
<path
d="M 125.94800,86.824500 L 146.35700,57.056300 L 200.16800,129.07500 L 199.10700,153.16000 L 189.81600,171.89900 L 125.94800,86.824500 z "
style="fill:#ffffff;"
id="path895" />
<path
d="M 181.23000,42.191700 L 242.58400,124.30500 L 220.71100,133.08600 L 200.16800,129.07500 L 146.35700,57.056300 L 181.23000,42.191700 z "
style="fill:#ffffff;"
id="path896" />
<path
d="M 106.56400,59.262000 L 129.82900,31.291300 L 162.71300,17.004500 L 152.70600,3.6100500 L 118.85500,16.197900 L 97.910200,47.274700"
style="fill:#ffffff;"
id="path897" />
<path
d="M 257.75800,202.50600 L 243.55500,123.57900 L 220.71100,133.08600 L 200.16800,129.07500 L 199.10700,153.16000 L 189.81600,171.89900"
style="fill:#ffffff;"
id="path898" />
<path
d="M 266.16900,214.37200 L 254.66400,124.05000 L 167.31200,7.3693200 C 158.54000,-4.3700000 135.26400,-1.8088400 115.32300,13.091600 C 95.381800,27.990900 86.327600,49.586100 95.098600,61.325800 L 182.56100,178.38100 L 241.60300,203.79700 C 242.18900,204.12200 242.80900,204.39000 243.46100,204.59700 L 266.16900,214.37200 z M 249.98200,181.02300 C 247.70200,181.48100 245.39100,182.48000 243.28800,184.05100 C 241.18700,185.62100 239.57500,187.55100 238.49000,189.60300 L 194.08100,169.48000 C 197.40000,164.30200 206.87500,148.94100 203.86800,133.41900 C 218.63400,139.25000 235.13600,133.19500 241.55300,128.40300 L 242.37800,129.51000 L 249.98200,181.02300 z M 112.79600,63.553400 C 115.80200,54.522600 123.43700,44.438700 134.40300,36.244300 C 144.10200,28.996700 154.48700,24.656400 163.34000,23.524700 L 174.42100,38.383100 C 165.38100,39.335600 154.62300,43.733200 144.60500,51.218400 C 134.01700,59.129200 126.50500,68.809200 123.29000,77.596900 L 112.79600,63.553400 z M 238.33400,124.09000 C 229.22300,131.99200 205.29100,130.79500 200.35700,125.38000 L 151.14000,59.508500 C 151.30200,59.384900 151.45300,59.255000 151.61800,59.132000 C 161.26900,51.920700 171.60000,47.590000 180.42300,46.431800 L 238.33400,124.09000 z M 147.32700,62.605200 L 196.42600,128.31700 C 196.42600,128.31700 203.91900,147.50300 189.48000,166.18200 L 129.97600,86.546000 C 132.55000,78.722800 138.62400,70.112100 147.32700,62.605200 z M 122.28100,22.404400 C 131.49200,15.523100 148.93900,10.151100 156.79400,15.547100 C 147.83900,16.588600 137.25700,20.958300 127.39100,28.330700 C 116.93100,36.145800 109.47900,45.685700 106.19900,54.387500 C 101.70500,44.936200 112.46400,29.739900 122.28100,22.404400 z "
id="path899" />
</g>
</g>
<path
d="M 1.9226074e-06,552.13143 L 1.9226074e-06,606.41943 L 21.960002,606.41943 L 21.960002,596.33943 L 10.728002,596.33943 L 10.728002,552.13143 L 1.9226074e-06,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path965" />
<path
d="M 48.168002,565.73943 C 51.696002,565.73943 54.648002,562.78743 54.648002,559.33143 C 54.648002,555.87543 51.912002,552.77943 48.384002,552.77943 C 44.712002,552.77943 41.760002,555.51543 41.760002,559.18743 C 41.760002,562.78743 44.640002,565.73943 48.168002,565.73943 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path964" />
<path
d="M 43.128002,570.05943 L 43.128002,606.41943 L 53.280002,606.41943 L 53.280002,570.05943 L 43.128002,570.05943 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path963" />
<path
d="M 73.944002,552.13143 L 89.640002,606.41943 L 98.424002,606.41943 L 113.47200,552.13143 L 102.24000,552.13143 L 95.688002,578.05143 C 94.824002,581.21943 94.608002,584.31543 94.032002,587.48343 L 93.888002,587.48343 C 93.240002,584.38743 92.880002,581.29143 91.944002,578.19543 L 85.464002,552.13143 L 73.944002,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path962" />
<path
d="M 135.50400,552.13143 L 135.50400,606.41943 L 157.24800,606.41943 L 157.24800,596.33943 L 146.23200,596.33943 L 146.23200,584.09943 L 155.80800,584.09943 L 155.80800,574.01943 L 146.23200,574.01943 L 146.23200,562.21143 L 157.10400,562.21143 L 157.10400,552.13143 L 135.50400,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path961" />
<path
d="M 199.44000,552.13143 L 188.71200,552.13143 L 188.71200,588.85143 C 188.71200,592.45143 189.07200,597.92343 184.03200,597.92343 C 181.80000,597.92343 179.71200,596.62743 178.41600,594.89943 L 178.41600,605.62743 C 180.72000,606.85143 183.81600,607.28343 186.48000,607.28343 C 199.94400,607.28343 199.44000,593.31543 199.44000,588.85143 L 199.44000,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path960" />
<path
d="M 283.60800,552.13143 L 283.60800,592.45143 C 283.60800,602.60343 289.58400,607.28343 299.59200,607.28343 C 315.07200,607.28343 315.72000,597.49143 315.72000,591.73143 L 315.72000,552.13143 L 304.99200,552.13143 L 304.99200,589.57143 C 304.92000,593.60343 304.92000,597.56343 299.66400,597.56343 C 293.83200,597.56343 294.33600,591.37143 294.33600,587.26743 L 294.33600,552.13143 L 283.60800,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path957" />
<path
d="M 396.50400,552.13143 L 396.50400,606.41943 L 407.23200,606.41943 L 407.23200,581.50743 L 407.16000,579.27543 L 406.65600,573.80343 L 406.80000,573.65943 L 420.04800,606.41943 L 430.20000,606.41943 L 430.20000,552.13143 L 419.47200,552.13143 L 419.47200,576.53943 C 419.47200,579.34743 419.68800,582.15543 420.33600,584.74743 L 420.19200,584.89143 L 406.80000,552.13143 L 396.50400,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path954" />
<path
d="M 511.05600,552.13143 L 511.05600,606.41943 L 533.01600,606.41943 L 533.01600,596.33943 L 521.78400,596.33943 L 521.78400,552.13143 L 511.05600,552.13143 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path951" />
<path
d="M 241.50000,551.28125 C 231.92399,551.28122 222.40625,558.90525 222.40625,579.28125 C 222.40625,599.65723 231.92400,607.28125 241.50000,607.28125 C 251.07599,607.28123 260.56250,599.65725 260.56250,579.28125 C 260.56251,558.90523 251.07600,551.28125 241.50000,551.28125 z M 241.50000,561.21875 C 248.62799,561.21875 249.12500,574.45725 249.12500,579.28125 C 249.12500,583.60123 248.62800,597.34375 241.50000,597.34375 C 234.37199,597.34377 233.84375,583.60125 233.84375,579.28125 C 233.84375,574.45723 234.37200,561.21875 241.50000,561.21875 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path869" />
<path
d="M 340.71875,552.12500 L 340.71875,606.40625 L 351.43750,606.40625 L 351.43750,581.37500 L 351.56250,581.37500 L 361.00000,606.40625 L 372.31250,606.40625 L 362.87500,581.71875 C 368.20300,578.76673 370.59375,573.66350 370.59375,567.68750 C 370.59374,552.85552 358.34000,552.12500 351.50000,552.12500 L 340.71875,552.12500 z M 351.43750,561.00000 L 352.87500,561.00000 C 358.27499,560.71200 360.15625,563.86200 360.15625,567.75000 C 360.15624,572.21400 358.20850,575.53125 353.31250,575.53125 L 351.43750,575.46875 L 351.43750,561.00000 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path870" />
<path
d="M 464.75000,552.12500 L 452.15625,606.40625 L 462.81250,606.40625 L 464.75000,597.28125 L 475.62500,597.28125 L 477.65625,606.40625 L 489.09375,606.40625 L 476.06250,552.12500 L 464.75000,552.12500 z M 469.93750,566.09375 L 470.09375,566.09375 L 471.31250,574.31250 L 474.06250,588.78125 L 466.50000,588.78125 L 468.78125,574.31250 L 469.93750,566.09375 z "
style="font-size:72.000000;font-weight:bold;fill:#110000;stroke-width:1.0000000pt;font-family:Futura Std Condensed;"
id="path871" />
</svg>
After Width: | Height: | Size: 12 KiB |
25
ljcom/htdocs/misc/pkg-webslave.txt
Normal file
@@ -0,0 +1,25 @@
apache-perl
aspell
aspell-en
less
libcompress-zlib-perl
libcrypt-ssleay-perl
libdbd-mysql-perl
libdbi-perl
libdigest-md5-perl
libgd-graph-perl
libgd-perl
libgd-text-perl
libimage-size-perl
libmime-lite-perl
libnet-perl
libproc-process-perl
libsoap-lite-perl
libssl09
libunicode-maputf8-perl
liburi-perl
libwww-perl
ntpdate
postfix
rsync
snmpd
79
ljcom/htdocs/misc/whereami.bml
Normal file
@@ -0,0 +1,79 @@
<?page
title=>Where are you?
body<=
<?_code
{
    use strict;

    # get a 'fake' remote ($u loaded from cookie) with no real authentication
    my $get_fake_remote = sub {

        my ($authtype, $user, $sessid, $auth, $_sopts) =
            split(/:/, $BML::COOKIE{ljsession});

        # fail unless it *seems* to be well-formed
        return undef unless $authtype eq "ws" && $sessid =~ /^\d+$/ && $auth =~ /^[a-zA-Z0-9]{10}$/;

        my $u = LJ::load_user($user);
        return undef unless $u && $u->{statusvis} ne 'L';

        return $u;
    };

    my $remote = LJ::get_remote();
    my $remote_is_fake = 0;
    unless ($remote) {
        $remote_is_fake = 1;
        $remote = $get_fake_remote->();
    }
    return "Not logged in." unless $remote;

    my $ret = "";

    my $authas = $remote->{user};
    my $u = $remote;

    # authas only works if $remote is not fake
    unless ($remote_is_fake) {

        # logic to authenticate as alternate user
        $authas = $GET{'authas'} || $remote->{'user'};
        $u = LJ::get_authas_user($authas);
        return LJ::bad_input("You could not be authenticated as the specified user.")
            unless $u;

        # authas switcher form
        $ret .= "<form method='get' action='whereami.bml'>";
        $ret .= LJ::make_authas_select($remote, { 'authas' => $GET{'authas'} });
        $ret .= "</form>";
    }

    # human-readable cluster name
    my $name = LJ::get_cluster_description($u->{clusterid}, 1);

    if ($remote_is_fake) {
        $ret .= "<p>You appear to be logged in as " . LJ::ljuser($authas) . ", which is on " .
                "$name, but we couldn't validate your login session, most likely " .
                "because $name is currently down. If you own any communities, you won't be able to " .
                "see where they are during this time. If they won't load, they're probably down for " .
                "maintenance.</p>";
    } else {
        $ret .= "<p>" . LJ::ljuser($authas) . " is on $name.</p>";
    }

    # is their cluster down?
    unless (LJ::get_cluster_master($u)) {
        $ret .= "<?h2 Cluster Status Alert h2?>";
        $ret .= "<?p $name appears to be down, most likely for maintenance. " .
                "Please follow the " . LJ::ljuser('lj_maintenance', { type => 'C' }) . " journal for " .
                "further status updates. p?>";

        $ret .= "<?p Further information about system-level outages can also be found at " .
                "<a href='http://status.livejournal.org/'>status.livejournal.org</a>. p?>";
    }

    return $ret;
}
_code?>
<=body
page?>