Hadoop, HBase, and Xceivers<p>Some of the configuration properties found in Hadoop have a direct effect on clients, such as HBase. One of those properties is called "<span style="font-family: 'Andale Mono';">dfs.datanode.max.xcievers</span>", and belongs to the HDFS subproject. It defines the number of server side threads and - to some extent - sockets used for data connections. Setting this number too low can cause problems as you grow or increase utilization of your cluster. This post will help you to understand what happens between the client and server, and how to determine a reasonable number for this property.</p>
<p><u><b>The Problem</b></u></p>
<p>Since HBase stores everything it needs inside HDFS, the hard upper boundary imposed by the "<span style="font-family: 'Andale Mono';">dfs.datanode.max.xcievers</span>" configuration property can result in too few resources being available to HBase, manifesting as IOExceptions on either side of the connection. Here is an example from the HBase mailing list [1], where the following messages were initially logged on the RegionServer side: </p>
<p>
<pre class="brush:plain; gutter: false;">
2008-11-11 19:55:52,451 INFO org.apache.hadoop.dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
2008-11-11 19:55:52,451 INFO org.apache.hadoop.dfs.DFSClient: Abandoning block blk_-5467014108758633036_595771
2008-11-11 19:55:58,455 WARN org.apache.hadoop.dfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
2008-11-11 19:55:58,455 WARN org.apache.hadoop.dfs.DFSClient: Error Recovery for block blk_-5467014108758633036_595771 bad datanode[0]
2008-11-11 19:55:58,482 FATAL org.apache.hadoop.hbase.regionserver.Flusher: Replay of hlog required. Forcing server shutdown
</pre></p>
<p>Correlating this with the Hadoop DataNode logs revealed the following entry:</p>
<p><pre class="brush:plain; gutter: false;">
ERROR org.apache.hadoop.dfs.DataNode: DatanodeRegistration(10.10.10.53:50010,storageID=DS-1570581820-10.10.10.53-50010-1224117842339,infoPort=50075, ipcPort=50020):DataXceiver: java.io.IOException: xceiverCount 258 exceeds the limit of concurrent xcievers 256
</pre></p>
<p>In this example, the low value of "<span style="font-family: 'Andale Mono';">dfs.datanode.max.xcievers</span>" for the DataNodes caused the entire RegionServer to shut down. This is a really bad situation. Unfortunately, there is no hard-and-fast rule that explains how to compute the required limit. It is commonly advised to raise the number from the default of 256 to something like 4096 (see [1], [2], [3], [4], and [5] for reference). This is done by adding this property to the <span style="font-family: 'Andale Mono';">hdfs-site.xml</span> file of all DataNodes (note that the property name itself is misspelled):</p>
<p>
<pre class="brush:xml; gutter: false;">
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
</pre></p>
<p>Note: You will need to restart your DataNodes after making this change to the configuration file.</p>
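<p>On an unmanaged installation you could, for example, restart the DataNodes one by one with the stock daemon scripts. This is just a sketch - the script location and invocation depend on your distribution and how you deployed Hadoop:</p>
<p>
<pre class="brush:plain; gutter: false;">
# run on each DataNode host, one at a time, after updating hdfs-site.xml
$ bin/hadoop-daemon.sh stop datanode
$ bin/hadoop-daemon.sh start datanode
</pre></p>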
<p>This should help with the above problem, but you still might want to know more about how this all plays together, and what HBase is doing with these resources. We will discuss this in the remainder of this post. But before we do, we need to be clear about why you cannot simply set this number very high, say 64K, and be done with it. There is a reason for an upper boundary, and it is twofold: first, threads need their own stack, which means they occupy memory. For current servers this means 1MB per thread [6] by default. In other words, if you use up all 4096 <span style="font-family: 'Andale Mono';">DataXceiver</span> threads, you need around 4GB of memory to accommodate their stacks. Since DataNodes typically share their servers with the HBase RegionServers, this cuts into the space you have assigned for memstores and block caches, as well as all the other moving parts of the JVMs. In a worst-case scenario, you might run into an <span style="font-family: 'Andale Mono';">OutOfMemoryException</span>, and the RegionServer process is toast. You want to set this property to a reasonably high number, but not too high either.</p>
<p>Second, with this many threads active you will also see your CPU becoming increasingly loaded. There will be many context switches to handle all the concurrent work, which takes resources away from the real work. As with the concerns about memory, you do not want the number of threads to grow boundlessly, but to have a reasonable upper boundary - and that is what "<span style="font-family: 'Andale Mono';">dfs.datanode.max.xcievers</span>" is for.</p>
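<p>To put a rough number on the memory impact yourself, you can check the JVM's default thread stack size and multiply it by the configured xceiver limit. The following is only a back-of-the-envelope sketch - flag availability, output format, and defaults vary by JVM version and platform:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ java -XX:+PrintFlagsFinal -version | grep ThreadStackSize
# ThreadStackSize is reported in KB; on many 64-bit JVMs the default is 1024, i.e. 1MB.
# 1MB per stack * 4096 DataXceiver threads ~= 4GB of memory used by the DataNode process.
</pre></p>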
<p><u><b>Hadoop File System Details</b></u></p>
<p>From the client side, the HDFS library provides the abstraction called <span style="font-family: 'Andale Mono';">Path</span>. This class represents a file in a file system supported by Hadoop, where the file system itself is represented by the <span style="font-family: 'Andale Mono';">FileSystem</span> class. There are a few concrete implementations of the abstract <span style="font-family: 'Andale Mono';">FileSystem</span> class, one of which is the <span style="font-family: 'Andale Mono';">DistributedFileSystem</span>, representing HDFS. This class in turn wraps the actual <span style="font-family: 'Andale Mono';">DFSClient</span> class that handles all interactions with the remote servers, i.e. the NameNode and the many DataNodes.</p>
<p>When a client, such as HBase, opens a file, it does so by, for example, calling the <span style="font-family: 'Andale Mono';">open()</span> or <span style="font-family: 'Andale Mono';">create()</span> methods of the <span style="font-family: 'Andale Mono';">FileSystem</span> class, shown here in their most simplistic incarnations:</p>
<p>
<pre class="brush:java; gutter: false;">
public DFSInputStream open(String src) throws IOException
public FSDataOutputStream create(Path f) throws IOException
</pre></p>
<p>The returned stream instance is what needs a server-side socket and thread, which are used to read and write blocks of data. They form part of the contract to exchange data between the client and server. Note that there are other, RPC-based protocols in use between the various machines, but for the purpose of this discussion they can be ignored.</p>
<p>The stream instance returned is a specialized <span style="font-family: 'Andale Mono';">DFSOutputStream</span> or <span style="font-family: 'Andale Mono';">DFSInputStream</span> class, which handles all of the interaction with the NameNode to figure out where the copies of the blocks reside, as well as the data communication per block per DataNode.</p>
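<p>Before we switch to the server side, here is a minimal, hypothetical client-side sketch that ties these classes together. The class name and file path are made up for illustration, and error handling is omitted:</p>
<p>
<pre class="brush:java; gutter: false;">
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsStreamExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // For an HDFS URI this returns a DistributedFileSystem instance, which
    // wraps the DFSClient that talks to the NameNode and the DataNodes.
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/xceiver-demo"); // hypothetical path

    // Writing: the returned FSDataOutputStream wraps a DFSOutputStream, and
    // flushing data through it occupies a DataXceiver thread on the DataNode.
    FSDataOutputStream out = fs.create(path);
    out.write("hello".getBytes("UTF-8"));
    out.close();

    // Reading: the returned FSDataInputStream wraps a DFSInputStream, and
    // reading a block occupies a DataXceiver thread on the serving DataNode.
    FSDataInputStream in = fs.open(path);
    System.out.println(in.read());
    in.close();
  }
}
</pre></p>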
<p>On the server side, the DataNode wraps an instance of <span style="font-family: 'Andale Mono';">DataXceiverServer</span>, which is the actual class that reads the above configuration key and also throws the above exception when the limit is exceeded.</p>
<p>When the DataNode starts, it creates a thread group and starts the mentioned <span style="font-family: 'Andale Mono';">DataXceiverServer</span> instance like so:</p>
<p>
<pre class="brush:java; gutter: false;">
this.threadGroup = new ThreadGroup("dataXceiverServer");
this.dataXceiverServer = new Daemon(threadGroup,
new DataXceiverServer(ss, conf, this));
this.threadGroup.setDaemon(true); // auto destroy when empty
</pre></p>
<p>Note that the <span style="font-family: 'Andale Mono';">DataXceiverServer</span> thread is already taking up one spot in the thread group. The DataNode also has this internal method to retrieve the number of currently active threads in this group:</p>
<p>
<pre class="brush:java; gutter: false;">
/** Number of concurrent xceivers per node. */
int getXceiverCount() {
return threadGroup == null ? 0 : threadGroup.activeCount();
}
</pre></p>
<p>Reading and writing blocks, as initiated by the client, causes a connection to be made, which is wrapped by the <span style="font-family: 'Andale Mono';">DataXceiverServer</span> thread into a <span style="font-family: 'Andale Mono';">DataXceiver</span> instance. During this hand-off, a thread is created and registered in the above thread group. So for every active read and write operation a new thread is tracked on the server side. If the count of threads in the group exceeds the configured maximum, then said exception is thrown and recorded in the DataNode's logs:</p>
<p>
<pre class="brush:java; gutter: false;">
if (curXceiverCount > dataXceiverServer.maxXceiverCount) {
throw new IOException("xceiverCount " + curXceiverCount
+ " exceeds the limit of concurrent xcievers "
+ dataXceiverServer.maxXceiverCount);
}
</pre></p>
<p><u><b>Implications for Clients</b></u></p>
<p>Now the question is, how does the client reading and writing relate to the server-side threads? Before we go into the details, let's use the debug information that the <span style="font-family: 'Andale Mono';">DataXceiver</span> class logs when it is created and closed</p>
<p>
<pre class="brush:java; gutter: false;">
LOG.debug("Number of active connections is: " + datanode.getXceiverCount());
...
LOG.debug(datanode.dnRegistration + ":Number of active connections is: "
+ datanode.getXceiverCount());
</pre></p>
<p>and monitor what is logged on the DataNode during a start of HBase. For simplicity's sake this is done on a pseudo-distributed setup with a single DataNode and RegionServer instance. The following shows the top of the RegionServer's status page.</p>
<p>
<a href="http://2.bp.blogspot.com/-H8YuCF5H87w/T1_Gue6g1nI/AAAAAAAAAB8/xyrUcARu72c/s1600/HadoopHBaseXceiverScreen1.jpg" imageanchor="1" style="margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-H8YuCF5H87w/T1_Gue6g1nI/AAAAAAAAAB8/xyrUcARu72c/s1600/HadoopHBaseXceiverScreen1.jpg" /></a>
</p>
<p>The important part is in the "Metrics" section, where it says "storefiles=22". So, assuming that HBase has at least that many files to handle, plus some extra files for the write-ahead log, we should see the above log message state that we have at least 22 "active connections". Let's start HBase and check the DataNode and RegionServer log files:</p>
<p>Command Line:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ bin/start-hbase.sh
...
</pre></p>
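<p>If you want to follow these messages live, you can tail the DataNode log in a second terminal and filter for the debug message shown earlier. The log file name below is only an example and depends on your installation; it also assumes DEBUG logging is enabled for the DataNode class:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ tail -f $HADOOP_HOME/logs/hadoop-*-datanode-*.log | grep "Number of active connections"
</pre></p>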
<p>DataNode Log:</p>
<pre class="brush:plain; gutter: false;">
2012-03-05 13:01:35,309 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 1
2012-03-05 13:01:35,315 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 2
12/03/05 13:01:35 INFO regionserver.MemStoreFlusher: globalMemStoreLimit=396.7m, globalMemStoreLimitLowMark=347.1m, maxHeap=991.7m
12/03/05 13:01:39 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 60030
2012-03-05 13:01:40,003 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 1
12/03/05 13:01:40 INFO regionserver.HRegionServer: Received request to open region: -ROOT-,,0.70236052
2012-03-05 13:01:40,882 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:40,884 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:40,888 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:40,902 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:40,907 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:40,909 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:40,910 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:40,911 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:40,915 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:40,917 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 4
2012-03-05 13:01:40,917 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:40,919 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
12/03/05 13:01:40 INFO regionserver.HRegion: Onlined -ROOT-,,0.70236052; next sequenceid=63083
2012-03-05 13:01:40,982 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:40,983 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:40,985 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:40,987 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:40,987 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 4
2012-03-05 13:01:40,989 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: .META.,,1.1028785192
2012-03-05 13:01:41,026 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:41,027 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:41,028 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:41,029 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:41,035 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:41,037 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:41,038 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:41,040 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:41,044 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:41,047 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
2012-03-05 13:01:41,048 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:41,051 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined .META.,,1.1028785192; next sequenceid=63082
2012-03-05 13:01:41,109 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 3
2012-03-05 13:01:41,114 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 4
2012-03-05 13:01:41,117 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 5
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open 16 region(s)
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,,1330944810191.62a312d67981c86c42b6bc02e6ec7e3f.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1120311784,1330944810191.90d287473fe223f0ddc137020efda25d.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1240813553,1330944811370.12c95922805e2cb5274396a723a94fa8.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1361265841,1330944811370.80663fcf291e3ce00080599964f406ba.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1481880893,1330944827566.fb3cc692757825e24295042fd42ff07c.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1602709751,1330944827566.dbd84a9c2a2e450799b10e7408e3e12e.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1723694337,1330944838296.cb9e191f0d8c0d8b64c192a52e35a6f0.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1844378668,1330944838296.577cc1efe165859be1341a9e1c566b12.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user1964968041,1330944848231.dd89596e9129e1caa7e07f8a491c9734.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user2084845901,1330944848231.24413155fef16ebb00213b8072bc266b.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user273134820,1330944848768.86d2a0254822edc967c0f27fe53e7734.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user394249140,1330944848768.df37ded27f9ba0f9ed76b2eb05db6c4e.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user515290649,1330944849739.d23924dc9e9d5891f332c337977af83d.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user63650623,1330944849739.a3b9f64e6abee00a2f39c33efa7ae8a2.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user757669512,1330944850808.cd0d6f16d8ae9cf0c9277f5d6c6c6b9f.
12/03/05 13:01:41 INFO regionserver.HRegionServer: Received request to open region: usertable,user878854551,1330944850808.9b8667b6d3c0baaaea491527cd781357.
2012-03-05 13:01:41,246 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,248 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,249 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 8
2012-03-05 13:01:41,256 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 9
2012-03-05 13:01:41,257 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 10
2012-03-05 13:01:41,257 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 9
2012-03-05 13:01:41,257 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,257 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,259 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 8
2012-03-05 13:01:41,260 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 9
2012-03-05 13:01:41,264 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,283 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1120311784,1330944810191.90d287473fe223f0ddc137020efda25d.; next sequenceid=62917
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,,1330944810191.62a312d67981c86c42b6bc02e6ec7e3f.; next sequenceid=62916
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1240813553,1330944811370.12c95922805e2cb5274396a723a94fa8.; next sequenceid=62918
2012-03-05 13:01:41,360 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,361 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,366 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,366 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,366 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 8
2012-03-05 13:01:41,368 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 9
2012-03-05 13:01:41,369 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 8
2012-03-05 13:01:41,370 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 9
2012-03-05 13:01:41,370 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 8
2012-03-05 13:01:41,372 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 9
2012-03-05 13:01:41,375 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,377 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1481880893,1330944827566.fb3cc692757825e24295042fd42ff07c.; next sequenceid=62919
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1602709751,1330944827566.dbd84a9c2a2e450799b10e7408e3e12e.; next sequenceid=62920
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1361265841,1330944811370.80663fcf291e3ce00080599964f406ba.; next sequenceid=62919
2012-03-05 13:01:41,474 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,491 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,495 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,508 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,512 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,513 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,514 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,521 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,533 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1844378668,1330944838296.577cc1efe165859be1341a9e1c566b12.; next sequenceid=62918
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1723694337,1330944838296.cb9e191f0d8c0d8b64c192a52e35a6f0.; next sequenceid=62917
2012-03-05 13:01:41,540 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,542 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,548 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user1964968041,1330944848231.dd89596e9129e1caa7e07f8a491c9734.; next sequenceid=62920
2012-03-05 13:01:41,618 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,621 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,636 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,649 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,650 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,654 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,662 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,665 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user2084845901,1330944848231.24413155fef16ebb00213b8072bc266b.; next sequenceid=62921
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user273134820,1330944848768.86d2a0254822edc967c0f27fe53e7734.; next sequenceid=62924
2012-03-05 13:01:41,673 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,677 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,679 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,687 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user394249140,1330944848768.df37ded27f9ba0f9ed76b2eb05db6c4e.; next sequenceid=62925
2012-03-05 13:01:41,790 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,800 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,806 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,815 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,819 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,821 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,824 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,825 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 13:01:41,827 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 13:01:41,829 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user515290649,1330944849739.d23924dc9e9d5891f332c337977af83d.; next sequenceid=62926
2012-03-05 13:01:41,832 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,838 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user757669512,1330944850808.cd0d6f16d8ae9cf0c9277f5d6c6c6b9f.; next sequenceid=62929
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user63650623,1330944849739.a3b9f64e6abee00a2f39c33efa7ae8a2.; next sequenceid=62927
2012-03-05 13:01:41,891 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,893 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
2012-03-05 13:01:41,894 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 13:01:41,896 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 7
12/03/05 13:01:41 INFO regionserver.HRegion: Onlined usertable,user878854551,1330944850808.9b8667b6d3c0baaaea491527cd781357.; next sequenceid=62930
2012-03-05 14:01:39,514 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 5
2012-03-05 14:01:39,711 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 4
2012-03-05 22:48:41,945 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
12/03/05 22:48:41 INFO regionserver.HRegion: Onlined usertable,user757669512,1330944850808.cd0d6f16d8ae9cf0c9277f5d6c6c6b9f.; next sequenceid=62929
2012-03-05 22:48:41,963 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 4
</pre></p>
<p>You can see how the regions are opened one after the other, but what you also might notice is that the number of active connections never climbs to 22 - it barely even reaches 10. Why is that? To understand this better, we have to see how files in HDFS map to server-side <span style="font-family: 'Andale Mono';">DataXceiver</span> instances - and the actual threads they represent. </p>
<p><u><b>Hadoop Deep Dive</b></u></p>
<p>The aforementioned <span style="font-family: 'Andale Mono';">DFSInputStream</span> and <span style="font-family: 'Andale Mono';">DFSOutputStream</span> are really facades around the usual stream concepts. They wrap the client-server communication into these standard Java interfaces, while internally routing the traffic to the selected DataNode - which is the one that holds a copy of the current block. The client library has the liberty to open and close these connections as needed: as a client reads a file in HDFS, it switches transparently from block to block, and therefore from DataNode to DataNode, opening and closing connections along the way. </p>
<p>The <span style="font-family: 'Andale Mono';">DFSInputStream</span> has an instance of a <span style="font-family: 'Andale Mono';">DFSClient.BlockReader</span> class, that opens the connection to the DataNode. The stream instance calls <span style="font-family: 'Andale Mono';">blockSeekTo()</span> for every call to <span style="font-family: 'Andale Mono';">read()</span> which takes care of opening the connection, if there is none already. Once a block is completely read the connection is closed. Closing the stream has the same effect of course. </p>
<p>The <span style="font-family: 'Andale Mono';">DFSOutputStream</span> has a similar helper class, the <span style="font-family: 'Andale Mono';">DataStreamer</span>. It tracks the connection to the server, which is initiated by the n<span style="font-family: 'Andale Mono';">extBlockOutputStream() m</span>ethod. It has further internal classes that help with writing the block data out, which we omit here for the sake of brevity.</p>
<p>Both writing and reading blocks require a thread on the server side to hold the socket and intermediate data, wrapped in a <span style="font-family: 'Andale Mono';">DataXceiver</span> instance. Depending on what your client is doing, you will see the number of connections fluctuate around the number of currently accessed files in HDFS.</p>
<p>Back to the HBase riddle above: the reason you do not see up to 22 (and more) connections during the start is that while the regions open, the only required data is the HFile's info block. This block is read to gain vital details about each file, but then closed again. This means that the server-side resource is released in quick succession. The remaining four connections are harder to determine. You can use JStack to dump all threads on the DataNode, which in this example shows these entries:</p>
<p>
<pre class="brush:plain; gutter: false;">
"DataXceiver for client /127.0.0.1:64281 [sending block blk_5532741233443227208_4201]" daemon prio=5 tid=7fb96481d000 nid=0x1178b4000 runnable [1178b3000]
java.lang.Thread.State: RUNNABLE
...
"DataXceiver for client /127.0.0.1:64172 [receiving block blk_-2005512129579433420_4199 client=DFSClient_hb_rs_10.0.0.29,60020,1330984111693_1330984118810]" daemon prio=5 tid=7fb966109000 nid=0x1169cb000 runnable [1169ca000]
java.lang.Thread.State: RUNNABLE
...
</pre></p>
<p>These are the only <span style="font-family: 'Andale Mono';">DataXceiver</span> entries (in this example), so the count in the thread group is a bit misleading. Recall that the <span style="font-family: 'Andale Mono';">DataXceiverServer</span> daemon thread already accounts for one extra entry, which combined with the two above accounts for three active connections - which in fact means three active threads. The reason the log states four instead is that it logs the count from an active thread that is about to finish. So, shortly after the count of four is logged, it is actually one less, i.e. three, and hence matches our head count of active threads.</p>
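<p>If you want to repeat this head count on your own machine, you can filter the JStack output for the DataXceiver threads. The commands below are only a sketch and assume the jps and jstack tools that ship with the JDK:</p>
<p>
<pre class="brush:plain; gutter: false;">
# look up the DataNode process ID and count its active DataXceiver threads
$ DN_PID=$(jps | grep DataNode | awk '{print $1}')
$ jstack $DN_PID | grep -c "DataXceiver for client"
# prints the number of currently active DataXceiver threads - two in the example above
</pre></p>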
<p>Also note that the internal helper classes, such as the <span style="font-family: 'Andale Mono';">PacketResponder</span>, occupy another thread in the group while they are active. The JStack output does indicate that fact, listing the thread as such:</p>
<p>
<pre class="brush:plain; gutter: false;">
"PacketResponder 0 for Block blk_-2005512129579433420_4199" daemon prio=5 tid=7fb96384d000 nid=0x116ace000 in Object.wait() [116acd000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.lastDataNodeRun(BlockReceiver.java:779)
- locked <7bc79c030> (a org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:870)
at java.lang.Thread.run(Thread.java:680)
</pre></p>
<p>This thread is currently in <span style="font-family: 'Andale Mono';">TIMED_WAITING</span> state and is not considered active. That is why the count emitted by the <span style="font-family: 'Andale Mono';">DataXceiver</span> log statements does not include these kinds of threads. If they become active because the client is sending data, the active thread count will go up again. Another thing to note is that this thread does not need a separate connection, or socket, between the client and the server. The <span style="font-family: 'Andale Mono';">PacketResponder</span> is just a thread on the server side that receives block data and streams it to the next DataNode in the write pipeline.</p>
<p>The Hadoop fsck command also has an option to report what files are currently open for writing:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ hadoop fsck /hbase -openforwrite
FSCK started by larsgeorge from /10.0.0.29 for path /hbase at Mon Mar 05 22:59:47 CET 2012
....../hbase/.logs/10.0.0.29,60020,1330984111693/10.0.0.29%3A60020.1330984118842 0 bytes, 1 block(s), OPENFORWRITE: ......................................Status: HEALTHY
Total size: 2088783626 B
Total dirs: 54
Total files: 45
...
</pre></p>
<p>This does not immediately relate to an occupied server-side thread, as these are allocated by block ID. But you can glean from it that there is one block open for writing. The Hadoop command has additional options to print out the actual files and the block IDs they are comprised of:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ hadoop fsck /hbase -files -blocks
FSCK started by larsgeorge from /10.0.0.29 for path /hbase at Tue Mar 06 10:39:50 CET 2012
...
/hbase/.META./1028785192/.tmp <dir>
/hbase/.META./1028785192/info <dir>
/hbase/.META./1028785192/info/4027596949915293355 36517 bytes, 1 block(s): OK
0. blk_5532741233443227208_4201 len=36517 repl=1
...
Status: HEALTHY
Total size: 2088788703 B
Total dirs: 54
Total files: 45 (Files currently being written: 1)
Total blocks (validated): 64 (avg. block size 32637323 B) (Total open file blocks (not validated): 1)
Minimally replicated blocks: 64 (100.0 %)
...
</pre></p>
<p>This gives you two things. First, the summary states that there is one open file block at the time the command ran - matching the count reported by the "<span style="font-family: 'Andale Mono';">-openforwrite</span>" option above. Second, the list of blocks next to each file lets you match the thread name to the file that contains the block being accessed. In this example the block with the ID "<span style="font-family: 'Andale Mono';">blk_5532741233443227208_4201</span>" is sent from the server to the client, here a RegionServer. This block belongs to the HBase <span style="font-family: 'Andale Mono';">.META.</span> table, as shown by the output of the Hadoop fsck command. The combination of JStack and fsck can serve as a poor man's replacement for lsof (a tool on the Linux command line to "list open files").</p>
<p>The JStack output also reports that there is a <span style="font-family: 'Andale Mono';">DataXceiver</span> thread, with an accompanying <span style="font-family: 'Andale Mono';">PacketResponder</span>, for block ID "<span style="font-family: 'Andale Mono';">blk_-2005512129579433420_4199</span>", but this ID is missing from the list of blocks reported by fsck. This is because the block is not yet finished and therefore not available to readers. In other words, Hadoop fsck only reports on complete (or synced [7][8], for Hadoop versions that support this feature) blocks. </p>
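<p>Putting the two tools together, a sketch of this "poor man's lsof" could look like the following, using the DataNode process ID captured in the earlier sketch. The block ID in the second command is the one from this example; on your cluster you would substitute whatever IDs your JStack output shows:</p>
<p>
<pre class="brush:plain; gutter: false;">
# list the block IDs handled by the currently active DataXceiver threads
$ jstack $DN_PID | grep "DataXceiver for client" | grep -o "blk_[0-9_-]*"
# map a block ID back to its file; -B 1 prints the file line above the matching block line
$ hadoop fsck /hbase -files -blocks | grep -B 1 "blk_5532741233443227208"
</pre></p>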
<hr />
<p><u><b>Practical Example</b></u></p>
<p>We can verify this using the HBase JRuby shell. For this exercise we should stop HBase, which will close out all open files, remove all active DataXceiver threads from the JStack output, and reduce the number of active connections as reported by the DataNode's debug logs to one - the server thread, as you know by now.</p>
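<p>Assuming the standard scripts shipped with HBase - mirroring the start-hbase.sh used earlier - stopping the cluster for this exercise is simply:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ bin/stop-hbase.sh
</pre></p>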
<p><u>Write Data</u></p>
<p>Let's start with the process of writing data. Open the HBase shell and in another terminal check the content of the file system:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ hadoop dfs -ls /
Found 3 items
drwxr-xr-x - larsgeorge supergroup 0 2012-03-04 14:13 /Users
drwxr-xr-x - larsgeorge supergroup 0 2011-11-15 11:17 /Volumes
drwxr-xr-x - larsgeorge supergroup 0 2012-03-05 11:41 /hbase
</pre></p>
<p>Now in the HBase shell, we enter these commands, while at the same time checking the output of the DataNode logs:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.4-cdh3u2, r, Thu Oct 13 20:32:26 PDT 2011
hbase(main):001:0> import org.apache.hadoop.hdfs.DFSClient
=> Java::OrgApacheHadoopHdfs::DFSClient
hbase(main):002:0> import org.apache.hadoop.conf.Configuration
=> Java::OrgApacheHadoopConf::Configuration
hbase(main):003:0> dfs = DFSClient.new(Configuration.new)
=> #<Java::OrgApacheHadoopHdfs::DFSClient:0x552c937c>
hbase(main):004:0> s = dfs.create('/testfile', false)
12/03/06 11:54:51 DEBUG hdfs.DFSClient: /testfile: masked=rwxr-xr-x
12/03/06 11:54:51 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/testfile, chunkSize=516, chunksPerPacket=127, packetSize=65557
=> #<#<Class:0x1521bfefb>:0x3abb1bc4>
</pre></p>
<p>This means we have created an output stream to a file in HDFS. So far this file only appears in the HDFS ls command, but not in the JStack output, nor in the logs as an increased active connections count.</p>
<p>
<pre class="brush:plain; gutter: false;">
$ hadoop dfs -ls /
Found 4 items
drwxr-xr-x - larsgeorge supergroup 0 2012-03-04 14:13 /Users
drwxr-xr-x - larsgeorge supergroup 0 2011-11-15 11:17 /Volumes
drwxr-xr-x - larsgeorge supergroup 0 2012-03-05 11:41 /hbase
-rw-r--r-- 1 larsgeorge supergroup 0 2012-03-06 11:54 /testfile
</pre></p>
<p>The file size is zero bytes, as expected. The operation we have just performed is a pure metadata operation, involving only the NameNode. We have not written anything yet, so no DataNode is involved. We start to write into the stream like so:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):005:0> s.write(1)
hbase(main):006:0> s.write(1)
hbase(main):007:0> s.write(1)
</pre></p>
<p>Again nothing changes; no block is being generated, because the data is buffered on the client side. To set the wheels in motion, we next have the choice to <span style="font-family: 'Andale Mono';">close</span> or <span style="font-family: 'Andale Mono';">sync</span> the data to HDFS. Using <span style="font-family: 'Andale Mono';">close</span> would quickly write the block and then close everything down - too quick for us to observe the thread creation; only the logs would state the increase and decrease of the thread count. We rather use <span style="font-family: 'Andale Mono';">sync</span> to flush out the few bytes we have written, while keeping the block open for writing:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):008:0> s.sync
12/03/06 12:04:04 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/testfile, packetSize=65557, chunksPerPacket=127, bytesCurBlock=0
12/03/06 12:04:04 DEBUG hdfs.DFSClient: DFSClient flush() : saveOffset 0 bytesCurBlock 3 lastFlushOffset 0
12/03/06 12:04:04 DEBUG hdfs.DFSClient: Allocating new block
12/03/06 12:04:04 DEBUG hdfs.DFSClient: pipeline = 127.0.0.1:50010
12/03/06 12:04:04 DEBUG hdfs.DFSClient: Connecting to 127.0.0.1:50010
12/03/06 12:04:04 DEBUG hdfs.DFSClient: Send buf size 131072
12/03/06 12:04:04 DEBUG hdfs.DFSClient: DataStreamer block blk_1029135823215149276_4209 wrote packet seqno:0 size:32 offsetInBlock:0 lastPacketInBlock:false
12/03/06 12:04:04 DEBUG hdfs.DFSClient: DFSClient Replies for seqno 0 are SUCCESS
</pre></p>
<p>Note: The logging level of HBase is set to DEBUG, which causes the above log statements to be printed on the console.</p>
<p>The logs show an increase by one of the number of active connections, and JStack lists the two threads involved in the writing process:</p>
<p>DataNode Log:</p>
<p>
<pre class="brush:plain; gutter: false;">
2012-03-06 12:04:04,457 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 1
</pre></p>
<p>JStack Output:</p>
<p>
<pre class="brush:plain; gutter: false;">
"PacketResponder 0 for Block blk_1029135823215149276_4209" daemon prio=5 tid=7fb96395e800 nid=0x116ace000 in Object.wait() [116acd000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.lastDataNodeRun(BlockReceiver.java:779)
- locked <7bcf01050> (a org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:870)
at java.lang.Thread.run(Thread.java:680)
...
"DataXceiver for client /127.0.0.1:50556 [receiving block blk_1029135823215149276_4209 client=DFSClient_-1401479495]" daemon prio=5 tid=7fb9660c7000 nid=0x1169cb000 runnable [1169ca000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:136)
...
</pre></p>
<p>Closing the stream will finalize the block, close all open threads that were needed (two in this example), and log the decreased active connection count: </p>
<p>HBase Shell:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):011:0> s.close
12/03/06 12:08:36 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/testfile, packetSize=65557, chunksPerPacket=127, bytesCurBlock=0
12/03/06 12:08:36 DEBUG hdfs.DFSClient: DataStreamer block blk_1029135823215149276_4209 wrote packet seqno:1 size:32 offsetInBlock:0 lastPacketInBlock:true
12/03/06 12:08:36 DEBUG hdfs.DFSClient: DFSClient Replies for seqno 1 are SUCCESS
12/03/06 12:08:36 DEBUG hdfs.DFSClient: Closing old block blk_1029135823215149276_4209
</pre></p>
<p>DataNode Log:</p>
<p>
<pre class="brush:plain; gutter: false;">
2012-03-06 12:08:36,551 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 2
</pre></p>
<p>The interesting part about the log statements is that they are printed <i>before</i> the thread is started, and <i>before</i> it is ended, meaning they will show one too few, and one too many, respectively. Also recall that while the block is being written to, it is accounted for in the output of Hadoop fsck's "<span style="font-family: 'Andale Mono';">-blocks</span>" or "<span style="font-family: 'Andale Mono';">-openforwrite</span>" options.</p>
<p><u>Read Data</u></p>
<p>Pretty much the same overall goes for reading data:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):012:0> r = dfs.open('/testfile')
=> #<Java::OrgApacheHadoopHdfs::DFSClient::DFSInputStream:0xde81d48>
</pre></p>
<p>When you create the input stream, nothing happens on the server side, as the client has yet to indicate what part of the file it wants to read. If you start to read, the client is routed to the proper server using the aforementioned <span style="font-family: 'Andale Mono';">blockSeekTo()</span> call internally:</p>
<p>DataNode Log:</p>
<p>
<pre class="brush:plain; gutter: false;">
2012-03-06 12:18:02,055 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 1
</pre></p>
<p>HBase Shell:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):013:0> r.read
12/03/06 12:18:02 DEBUG fs.FSInputChecker: DFSClient readChunk got seqno 0 offsetInBlock 0 lastPacketInBlock true packetLen 11
=> 1
</pre></p>
<p>DataNode Log:</p>
<p>
<pre class="brush:plain; gutter: false;">
2012-03-06 12:18:02,110 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 2
</pre></p>
<p>Since an entire buffer's worth of data - configured with "<span style="font-family: 'Andale Mono';">io.file.buffer.size</span>", and set to 4096 (4KB) by default - is read, and our file was very small (3 bytes), it was read in one go and the server-side thread was released right away. If we were to read a larger file, a connection would remain open for reading chunk after chunk of the entire block. We can pick a large file from the HBase directory, open the input stream, and start reading byte by byte:</p>
<p>Command Line:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ hadoop dfs -lsr /hbase
drwxr-xr-x - larsgeorge supergroup 0 2012-03-03 12:33 /hbase/-ROOT-
...
-rw-r--r-- 1 larsgeorge supergroup 111310855 2012-03-05 11:54 /hbase/usertable/12c95922805e2cb5274396a723a94fa8/family/1454340804524549239
...
</pre></p>
<p>HBase Shell:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):015:0> r2 = dfs.open('/hbase/usertable/12c95922805e2cb5274396a723a94fa8/family/1454340804524549239')
=> #<Java::OrgApacheHadoopHdfs::DFSClient::DFSInputStream:0x92524b0>
hbase(main):016:0> r2.read
12/03/06 12:26:12 DEBUG fs.FSInputChecker: DFSClient readChunk got seqno 0 offsetInBlock 0 lastPacketInBlock false packetLen 66052
=> 68
</pre></p>
<p>DataNode Log:</p>
<p>
<pre class="brush:plain; gutter: false;">
2012-03-06 12:26:12,294 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 1
</pre></p>
<p>HBase Shell:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):017:0> r2.read
=> 65
hbase(main):018:0> r2.read
=> 84
</pre></p>
<p>JStack Output:</p>
<p>
<pre class="brush:plain; gutter: false;">
"DataXceiver for client /127.0.0.1:52504 [sending block blk_7972699930188289769_4166]" daemon prio=5 tid=7fb965961800 nid=0x116670000 runnable [11666f000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
...
</pre></p>
<p>The JStack output shows how the thread is kept running on the server to serve more data as requested by the client. If you close the stream, the resource is freed subsequently, just as expected:</p>
<p>HBase Shell:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):019:0> r2.close
</pre></p>
<p>DataNode Log:</p>
<p>
<pre class="brush:plain; gutter: false;">
2012-03-06 12:31:16,659 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 2
</pre></p>
<p>So <i>after</i> the thread really exits (should be in the nano- or millisecond range) the count will go back to the minimum of one - which is the <span style="font-family: 'Andale Mono';">DataXceiverServer</span> thread, yes you are correct. :)</p>
<hr />
<p><u><b>Back to HBase</b></u></p>
<p>Opening all the regions does not need as many resources on the server as you would have expected. If you scan the entire HBase table though, you force HBase to read all of the blocks in all HFiles:</p>
<p>HBase Shell:</p>
<p>
<pre class="brush:plain; gutter: false;">
hbase(main):003:0> scan 'usertable'
...
1000000 row(s) in 1460.3120 seconds
</pre></p>
<p>DataNode Log:</p>
<p>
<pre class="brush:plain; gutter: false;">
2012-03-05 14:42:20,580 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 6
2012-03-05 14:43:23,293 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 14:43:23,299 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 14:44:00,255 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 14:44:53,566 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 8
2012-03-05 14:44:53,567 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 7
2012-03-05 14:45:33,562 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 8
2012-03-05 14:46:25,074 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 9
2012-03-05 14:46:25,075 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 10
2012-03-05 14:47:07,854 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 9
2012-03-05 14:47:58,244 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 10
2012-03-05 14:47:58,244 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 9
2012-03-05 14:48:30,010 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 10
2012-03-05 14:49:24,332 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 11
2012-03-05 14:49:24,332 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 10
2012-03-05 14:49:59,987 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 11
2012-03-05 14:51:12,603 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 12
2012-03-05 14:51:12,605 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 11
2012-03-05 14:51:46,473 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 12
2012-03-05 14:52:37,052 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 13
2012-03-05 14:52:37,053 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 12
2012-03-05 14:53:20,047 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 13
2012-03-05 14:54:11,859 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 14
2012-03-05 14:54:11,860 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 15
2012-03-05 14:54:43,615 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 14
2012-03-05 14:55:36,214 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 15
2012-03-05 14:55:36,215 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 14
2012-03-05 14:56:10,440 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 15
2012-03-05 14:56:59,419 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 16
2012-03-05 14:56:59,420 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 15
2012-03-05 14:57:31,722 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 16
2012-03-05 14:58:24,909 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 17
2012-03-05 14:58:24,910 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 16
2012-03-05 14:58:57,186 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 17
2012-03-05 14:59:47,294 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 18
2012-03-05 14:59:47,294 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 17
2012-03-05 15:00:23,101 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 18
2012-03-05 15:01:14,853 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 19
2012-03-05 15:01:14,854 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 18
2012-03-05 15:01:47,388 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 19
2012-03-05 15:02:39,900 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 20
2012-03-05 15:02:39,901 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 19
2012-03-05 15:03:18,794 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 20
2012-03-05 15:04:17,688 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 21
2012-03-05 15:04:17,689 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 22
2012-03-05 15:04:54,545 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 21
2012-03-05 15:05:55,901 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1423642448-10.0.0.64-50010-1321352233772, infoPort=50075, ipcPort=50020):Number of active connections is: 22
2012-03-05 15:05:55,901 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: Number of active connections is: 21
</pre></p>
<p>The number of active connections reaches the elusive 22 now. Note that this count already includes the server thread, so we are still a little short of what we could consider the theoretical maximum - based on the number of files HBase has to handle.</p>
<p><b><u>What does that all mean?</u></b></p>
<p>So, how many "xcievers (sic)" do you need? Given you only use HBase, you could simply monitor the above "storefiles" metric (which you get also through Ganglia or JMX) and add a few percent for intermediate and write-ahead log files. This should work for systems in motion. However, if you were to determine that number on an idle, fully compacted system and assume it is the maximum, you might find this number being too low once you start adding more store files during regular memstore flushes, i.e. as soon as you start to add data to the HBase tables. Or if you also use MapReduce on that same cluster, Flume log aggregation, and so on. You will need to account for those extra files, and, more importantly, open blocks for reading and writing. </p>
<p>Note again that the examples in this post use a single DataNode, something you will not have on a real cluster. For a real cluster you will have to divide the total number of store files (as per the HBase metric) by the number of DataNodes you have. If you have, for example, a store file count of 1000, and your cluster has 10 DataNodes, then you should be OK with the default of 256 xceiver threads per DataNode.</p>
<p>The worst case would be the number of all active readers and writers, i.e. those that are currently sending or receiving data. But since this is hard to determine ahead of time, you might want to consider building in a decent reserve. Also, since the writing process needs an extra - although shorter lived - thread (for the <span style="font-family: 'Andale Mono';">PacketResponder</span>) you have to account for that as well. So a reasonable, but rather simplistic formula could be:</p>
<p>
<a href="http://3.bp.blogspot.com/-GOhrywXXbzI/T1_NsIyPC_I/AAAAAAAAACE/pnvMKo_0Ip0/s1600/HadoopHBaseXceiverFormula1.jpg" imageanchor="1" style="margin-bottom: 1em; margin-right: 1em;"><img border="0" width="800" src="http://3.bp.blogspot.com/-GOhrywXXbzI/T1_NsIyPC_I/AAAAAAAAACE/pnvMKo_0Ip0/s1600/HadoopHBaseXceiverFormula1.jpg" /></a>
</p>
<p>This formula takes into account that you need about two threads for an active writer and another for an active reader. This is then summed up and divided by the number of DataNodes, since you have to specify the "<span style="font-family: 'Andale Mono';">dfs.datanode.max.xcievers</span>" per DataNode.</p>
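<p>To make this a bit more tangible, here is a small back-of-the-envelope calculation following that description - the numbers are purely made up and only illustrate how the parts fit together:</p>
<p>
<pre class="brush:plain; gutter: false;">
# roughly: (2 threads per active writer + 1 thread per active reader), divided by the DataNode count
$ writers=10; readers=100; datanodes=5
$ echo $(( (2 * writers + readers) / datanodes ))
24
</pre></p>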
<p>If you loop back to the HBase RegionServer screenshot above, you will see that it showed 22 store files. These are immutable and will only be read, or in other words occupy one thread only. All memstores that are flushed to disk need two threads - but only until they are fully written. The files are then finalized and closed for good, cleaning up any thread in the process. So these come and go based on your flush frequency. The same goes for compactions: they read N files and write them into a single new one, then finalize the new file. As for the write-ahead logs, these will occupy a thread once you have started to add data to any table. There is a log file per server, meaning that you can only have twice as many active threads for these files as you have RegionServers.</p>
<p>For a pure HBase setup (HBase plus its own HDFS, with no other user), we can estimate the number of needed <span style="font-family: 'Andale Mono';">DataXceiver</span>'s with the following formula:</p>
<p>
<a href="http://3.bp.blogspot.com/-YmMo1yYo1us/T1_NsceYoyI/AAAAAAAAACM/g4-xDmlWDnc/s1600/HadoopHBaseXceiverFormula2.jpg" imageanchor="1" style="margin-bottom: 1em; margin-right: 1em;"><img border="0" width="800" src="http://3.bp.blogspot.com/-YmMo1yYo1us/T1_NsceYoyI/AAAAAAAAACM/g4-xDmlWDnc/s1600/HadoopHBaseXceiverFormula2.jpg" /></a>
</p>
<p>Since you will be hard pressed to determine the <i>active</i> number of store files, flushes, and so on, it might be better to estimate the theoretical maximum instead. This maximum value takes into account that you can only have a single flush and compaction active per region at any time. The maximum number of logs you can have active matches the number of RegionServers, leading us to this formula:</p>
<p>
<a href="http://2.bp.blogspot.com/-XRuD3CedW2A/T1_Ns3M08xI/AAAAAAAAACU/JhssJgFLLmI/s1600/HadoopHBaseXceiverFormula3.jpg" imageanchor="1" style="margin-bottom: 1em; margin-right: 1em;"><img border="0" width="800" src="http://2.bp.blogspot.com/-XRuD3CedW2A/T1_Ns3M08xI/AAAAAAAAACU/JhssJgFLLmI/s1600/HadoopHBaseXceiverFormula3.jpg" /></a>
</p>
<p>Obviously, the number of store files will increase over time, and typically the number of regions as well. The same is true for the number of servers, so keep in mind that you will have to adjust this setting over time. In practice, you can add a buffer of, for example, 20%, as shown in the formula below, so that you are not forced to change the value too often. </p>
<p>On the other hand, if you keep the number of regions per server fixed[9] and instead split them manually, adding new servers as you grow, you should be able to keep this configuration property stable for each server.</p>
<p><u><b>Final Advice & TL;DR</b></u></p>
<p>Here is the final formula you want to use:</p>
<p>
<a href="http://1.bp.blogspot.com/-4SFWAUe7Yx8/T1_NtXskFNI/AAAAAAAAACc/QvN9pd4qWds/s1600/HadoopHBaseXceiverFormula4.jpg" imageanchor="1" style="margin-bottom: 1em; margin-right: 1em;"><img border="0" width="800" src="http://1.bp.blogspot.com/-4SFWAUe7Yx8/T1_NtXskFNI/AAAAAAAAACc/QvN9pd4qWds/s1600/HadoopHBaseXceiverFormula4.jpg" /></a>
</p>
<p>It computes the maximum number of threads needed, based on your current HBase vitals (number of store files, regions, and region servers). It also adds a fudge factor of 20% to give you room for growth. Keep an eye on the numbers on a regular basis and adjust the value as needed. You might want to use Nagios with appropriate checks to warn you when any of the vitals changes by more than a certain percentage.</p>
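<p>As a rough worked example - reading the description above as "every store file needs a reader thread, every region could be flushing and compacting at the same time (two threads each), plus two threads per RegionServer for the write-ahead logs" - the math could look like the sketch below. All numbers are invented and this is only an illustration of the idea; treat the formula image above as the authoritative version:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ storefiles=1000; regions=250; regionservers=5; datanodes=10
$ total=$(( storefiles + 2 * regions + 2 * regions + 2 * regionservers ))
$ echo $(( total * 120 / 100 / datanodes ))
241
</pre></p>
<p>In this made-up case the default of 256 xceivers per DataNode would still be sufficient, but not by much.</p>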
<p>Note: Please make sure you also adjust the number of file handles your process is allowed to use accordingly[10]. This affects the number of sockets you can use, and if that number is too low (default is often 1024), you will get connection issues first. </p>
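<p>For example, a quick way to check the current limit and raise it for the users running the DataNode and RegionServer processes could look like this - the user names and values are only examples and depend on your installation:</p>
<p>
<pre class="brush:plain; gutter: false;">
$ ulimit -n
1024
# then, as root (or via sudo), raise the open file limit for the relevant users and log in again:
$ cat >> /etc/security/limits.conf << EOF
hadoop  -  nofile  32768
hbase   -  nofile  32768
EOF
</pre></p>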
<p>Finally, the engineering devil on one of your shoulders should already have started to snicker about how horribly non-Erlang-y this is, and how you should use an event driven approach, possibly using Akka with Scala[11] - if you want to stay within the JVM world. Bear in mind though that the clever developers in the community share the same thoughts and have already started to discuss various approaches[12][13]. </p>
<p>
Links:
<ul>
<li>[1] <a href="http://old.nabble.com/Re%3A-xceiverCount-257-exceeds-the-limit-of-concurrent-xcievers-256-p20469958.html" id="" target="_blank">http://old.nabble.com/Re%3A-xceiverCount-257-exceeds-the-limit-of-concurrent-xcievers-256-p20469958.html</a></li>
<li>[2] <a href="http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html" id="" target="_blank">http://ccgtech.blogspot.com/2010/02/hadoop-hdfs-deceived-by-xciever.html</a></li>
<li>[3] <a href="https://issues.apache.org/jira/browse/HDFS-1861" id="" target="_blank">https://issues.apache.org/jira/browse/HDFS-1861</a> "Rename dfs.datanode.max.xcievers and bump its default value"</li>
<li>[4] <a href="https://issues.apache.org/jira/browse/HDFS-1866" id="" target="_blank">https://issues.apache.org/jira/browse/HDFS-1866</a> "Document dfs.datanode.max.transfer.threads in hdfs-default.xml"</li>
<li>[5] <a href="http://hbase.apache.org/book.html#dfs.datanode.max.xcievers" id="" target="_blank">http://hbase.apache.org/book.html#dfs.datanode.max.xcievers</a></li>
<li>[6] <a href="http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom" id="" target="_blank">http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom</a></li>
<li>[7] <a href="https://issues.apache.org/jira/browse/HDFS-200" id="" target="_blank">https://issues.apache.org/jira/browse/HDFS-200</a> "In HDFS, sync() not yet guarantees data available to the new readers"</li>
<li>[8] <a href="https://issues.apache.org/jira/browse/HDFS-265" id="" target="_blank">https://issues.apache.org/jira/browse/HDFS-265</a> "Revisit append"</li>
<li>[9] <a href="http://search-hadoop.com/m/CBBoV3z24H1" id="" target="_blank">http://search-hadoop.com/m/CBBoV3z24H1</a> "HBase, mail # user - region size/count per regionserver"</li>
<li>[10] <a href="http://hbase.apache.org/book.html#ulimit" id="" target="_blank">http://hbase.apache.org/book.html#ulimit</a> "ulimit and nproc"</li>
<li>[11] <a href="http://akka.io/" id="" target="_blank">http://akka.io/</a> "Akka"</li>
<li>[12] <a href="https://issues.apache.org/jira/browse/HDFS-223" id="" target="_blank">https://issues.apache.org/jira/browse/HDFS-223</a> "Asynchronous IO Handling in Hadoop and HDFS"</li>
<li>[13] <a href="https://issues.apache.org/jira/browse/HDFS-918" id="" target="_blank">https://issues.apache.org/jira/browse/HDFS-918</a> "Use single Selector and small thread pool to replace many instances of BlockSender for reads"</li>
</ul>
</p>Lars Georgehttp://www.blogger.com/profile/13990677998590435541noreply@blogger.com6tag:blogger.com,1999:blog-860423771829255614.post-91976681183651735862010-10-26T04:55:00.000-07:002010-10-26T05:21:31.447-07:00Hadoop on EC2 - A Primer<p>As more and more companies discover the power of Hadoop and how it solves complex analytical problems it seems that there is a growing interest to quickly prototype new solutions - possibly on short lived or "throw away" cluster setups. Amazon's EC2 provides an ideal platform for such prototyping and there are a lot of great resources on how this can be done. I would like to mention "<a href="http://www.cloudera.com/blog/2009/07/tracking-trends-with-hadoop-and-hive-on-ec2/">Tracking Trends with Hadoop and Hive on EC2</a>" on the Cloudera Blog by Pete Skomoroch and "<a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=873">Running Hadoop MapReduce on Amazon EC2 and Amazon S3</a>" by Tom White. They give you full examples of how to process data stored on S3 using EC2 servers. Overall there seems to be a common need to quickly get insight into what a Hadoop and Hive based cluster can add in terms of business value. In this post I would like to take a step back though from the above full featured examples and show how you can use Amazon's services to set up an Hadoop cluster with the focus on the more "nitty gritty" details that are more difficult to find answers for.</p><h4>Starting a Cluster</h4><p>Let's jump into it head first and solve the problem of actually launching a cluster. You have heard that Hadoop is shipped with EC2 support, but how do you actually start up a Hadoop cluster on EC2? You do have a couple of choices and as Tom's article above explains you could start all instances in the cluster by hand. But why would you want to do that if there are scripts available that do all the work for you? And to complicate matters, how do you select the AMI (the Amazon Machine Image) that has the Hadoop version you need or want? Does it have Hive installed for your subsequent analysis of the collected data? Just running a check to count the available public Hadoop images returns 41!<br />
<br />
<pre class="brush:plain; gutter: false;">$ ec2-describe-images -a | grep hadoop | wc -l
41</pre><br />
That gets daunting very quickly. Sure, you can roll your own - but that implies even more manual labor that you would probably rather spend on productive work. But there is help available...</p><p>By far one of the most popular ways to install Hadoop today is using Cloudera's Distribution for Hadoop - also known as CDH. It packs all the tools you usually need into easy-to-install packages and pre-configures everything for typical workloads. Sweet! And since it also offers each "HStack" tool as a separate installable package you can decide what you need and install additional applications as required. We will make use of that feature below, as well as other advanced configuration options.</p><p>There are not one but at least three script packages available to start a Hadoop cluster. The following table lists the most prominent ones:<br />
<br />
<table border="1" bordercolor="#000000" cellpadding="8" cellspacing="0" style="page-break-before: always;" width="95%"><thead>
<tr style="color: #FFFFFF; background-color: #000000; font-weight: bold"> <td>Name</td> <td>Vendor</td> <td>Language</td> <td>Fixed Packages</td> <td>Notes</td> </tr>
</thead> <tbody>
<tr> <td style="white-space: nowrap;"><a href="http://wiki.apache.org/hadoop/AmazonEC2">Hadoop EC2 Scripts</a></td> <td>Apache Hadoop</td> <td>Bash</td> <td>Yes</td> <td>Requires special Hadoop AMIs.</td> </tr>
<tr> <td style="white-space: nowrap;"><a href="https://wiki.cloudera.com/display/DOC/CDH+Cloud+Scripts">CDH Cloud Scripts</a></td> <td>Cloudera</td> <td>Python</td> <td>No</td> <td>Fixed to use CDH packages.</td> </tr>
<tr> <td style="white-space: nowrap;"><a href="http://incubator.apache.org/projects/whirr.html">Whirr</a></td> <td>Apache Whirr</td> <td>Python</td> <td>No</td> <td>Not yet on same level feature wise compared to CDH Cloud Scripts. Can run plain Apache Hadoop images as well as CDH. Supports multiple cloud vendors.</td> </tr>
</tbody> </table><br />
They are ordered by their availability date, so the first available was the Bash-based "Hadoop EC2 Scripts" contribution package. It is part of the Apache Hadoop tarball and can start selected AMIs with Hadoop preinstalled on them. While you may be able to customize the init script to install additional packages, you are bound to whatever version of Hadoop the AMI provides. This limitation is overcome with the CDH Cloud Scripts and Apache Whirr, which is the successor to the CDH scripts. All three EC2 script packages were created by Cloudera's own Tom White, so you may notice similarities between them. In general you could say that each extends the former while applying what has been learned during their usage in the real world. Also, Python has the advantage of running on Windows, Unix, or Linux, whereas the Bash scripts are not a good fit for "some of these" (*cough*), but that seems obvious. </p><p>For the remainder of this post we will focus on the CDH Cloud Scripts as they are the current status quo when it comes to starting Hadoop on EC2 clusters. But please keep an eye on Whirr as it will supersede the CDH Cloud Scripts sooner or later - and be added to the CDH releases subsequently (it is in CDH3B3 now!).</p><p>I have mentioned the various AMIs above and that (at the time of this post) there are at least 41 of them available providing support for Hadoop in one way or another. But why would you have to create your own images or switch to other ones as newer versions of Hadoop are released in the future? Wouldn't it make more sense to have a base AMI that somehow magically bootstraps the Hadoop version you need onto the cluster as you materialize it? You may have guessed it by now: that is exactly what the Cloudera AMIs are doing! All of these scripts use a mechanism called <a href="http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/AESDG-chapter-instancedata.html">Instance Data</a> which allows them to "hand in" configuration details to the AMI instances as they start. While the Hadoop EC2 Scripts only use this for limited configuration (the rest being up to you - we will see an example of how that is done below), the CDH and Whirr scripts employ this feature to bootstrap everything, including Hadoop. The instance data is a script called <code>hadoop-ec2-init-remote.sh</code> which is compressed and provided to the server as it starts. The trick is that the Cloudera AMIs have a mechanism to execute this script before starting the Hadoop daemons:<br />
<br />
<pre class="brush:plain; gutter: false;">root@ip-10-194-222-3:~# ls -la /etc/init.d/{hadoop,ec2}*
-rwxr-xr-x 1 root root 1395 2009-04-18 21:36 /etc/init.d/ec2-get-credentials
-rwxr-xr-x 1 root root 286 2009-04-18 21:36 /etc/init.d/ec2-killall-nash-hotplug
-rwxr-xr-x 1 root root 125 2009-04-18 21:36 /etc/init.d/ec2-mkdir-tmp
-rwxr--r-- 1 root root 1945 2009-06-23 14:37 /etc/init.d/ec2-run-user-data
-rw-r--r-- 1 root root 709 2009-04-18 21:36 /etc/init.d/ec2-ssh-host-key-gen
-rwxr-xr-x 1 root root 4280 2010-03-22 06:19 /etc/init.d/hadoop-0.20-datanode
-rwxr-xr-x 1 root root 4296 2010-03-22 06:19 /etc/init.d/hadoop-0.20-jobtracker
-rwxr-xr-x 1 root root 4437 2010-03-22 06:19 /etc/init.d/hadoop-0.20-namenode
-rwxr-xr-x 1 root root 4352 2010-03-22 06:19 /etc/init.d/hadoop-0.20-secondarynamenode
-rwxr-xr-x 1 root root 4304 2010-03-22 06:19 /etc/init.d/hadoop-0.20-tasktracker</pre><br />
and<br />
<br />
<pre class="brush:plain; gutter: false;">root@ip-10-194-222-3:~# ls -la /etc/rc2.d/*{hadoop,ec2}*
lrwxrwxrwx 1 root root 32 2010-09-13 12:32 /etc/rc2.d/S20hadoop-0.20-jobtracker -> ../init.d/hadoop-0.20-jobtracker
lrwxrwxrwx 1 root root 30 2010-09-13 12:32 /etc/rc2.d/S20hadoop-0.20-namenode -> ../init.d/hadoop-0.20-namenode
lrwxrwxrwx 1 root root 39 2010-09-13 12:32 /etc/rc2.d/S20hadoop-0.20-secondarynamenode -> ../init.d/hadoop-0.20-secondarynamenode
lrwxrwxrwx 1 root root 29 2009-06-23 14:58 /etc/rc2.d/S70ec2-get-credentials -> ../init.d/ec2-get-credentials
lrwxrwxrwx 1 root root 27 2009-06-23 14:58 /etc/rc2.d/S71ec2-run-user-data -> ../init.d/ec2-run-user-data</pre><br />
work their magic to get the user data (which is one part of the "Instance Data") and optionally decompress it before executing the script handed in. The only other requirement is that the AMI must have Java installed as well. As we look into further pieces of the puzzle we will get back to this init script. For now let it suffice to say that it does the bootstrapping of our instances and installs whatever we need dynamically during the start of the cluster.<br />
<br />
Note: I am using the Ubuntu AMIs for all examples and code snippets in this post.</p><h4>All about the options</h4><p>First you need to <a href="http://docs.cloudera.com/display/DOC/Installing+the+CDH+Cloud+Scripts">install</a> the CDH Cloud Scripts, which is rather straightforward. For example, first install the Cloudera CDH tarball:<br />
<br />
<pre class="brush:plain; gutter: false;">$ wget http://archive.cloudera.com/cdh/2/hadoop-0.20.1+169.89.tar.gz
$ tar -zxvf hadoop-0.20.1+169.89.tar.gz
$ export PATH=$PATH:~/hadoop-0.20.1+169.89/src/contrib/cloud/src/py</pre><br />
Then install the required Python libs; we assume you already have Python installed in this example:<br />
<br />
<pre class="brush:plain; gutter: false;">$ sudo apt-get install python-setuptools
$ sudo easy_install "simplejson==2.0.9"
$ sudo easy_install "boto==1.8d"</pre><br />
Now you are able to run the CDH Cloud Scripts - but to be really useful you need to configure them first. Cloudera has a <a href="http://docs.cloudera.com/display/DOC/Configuring+and+Running+CDH+Cloud+Scripts">document</a> that explains the details. Obviously, while using those scripts a few more ideas come up and get added over time. Have a look at this example <code>.hadoop-cloud</code> directory:<br />
<br />
<pre class="brush:plain; gutter: false;">$ ls -lA .hadoop-cloud/
total 40
-rw-r--r-- 1 lars lars 489 2010-09-13 09:52 clusters-c1.medium.cfg
-rw-r--r-- 1 lars lars 358 2010-09-10 06:13 clusters-c1.xlarge.cfg
lrwxrwxrwx 1 lars lars 22 2010-08-22 06:04 clusters.cfg -> clusters-c1.medium.cfg
-rw-r--r-- 1 lars lars 17601 2010-09-13 10:19 hadoop-ec2-init-remote-cdh2.sh
drwxr-xr-x 2 lars lars 4096 2010-08-15 14:14 lars-test-cluster/</pre><br />
You can see that it has multiple <code>clusters.cfg</code> configuration files that differ only in their settings for the <code>image_id</code> (the AMI to be used) and the <code>instance_type</code>. Here is one of those files:<br />
<br />
<pre class="brush:plain; gutter: false;">$ cat .hadoop-cloud/clusters-c1.medium.cfg
[lars-test-cluster]
image_id=ami-ed59bf84
instance_type=c1.medium
key_name=root
availability_zone=us-east-1a
private_key=~/.ec2/root.pem
ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no
user_data_file=http://archive.cloudera.com/cloud/ec2/cdh2/hadoop-ec2-init-remote.sh
user_packages=lynx s3cmd
env=AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key></pre><br />
All you have to do now is switch the symbolic link to run either cluster setup. Obviously another option would be to use the <a href="https://docs.cloudera.com/display/DOC/Using+Command+Line+Options">command line options</a> which <code>hadoop-ec2</code> offers. Execute <code>$ hadoop-ec2 launch-cluster --help</code> to see what is available. You can override the values from the current clusters.cfg or even select a completely different configuration directory. Personally I like the symlink approach as this allows me to keep the settings for each cluster instance together in a separate configuration file - but as usual, the choice is yours. You could also save each <code>hadoop-ec2</code> call in a small Bash script along with all command line options in it.</p><p>Back to the <code>.hadoop-cloud</code> directory above. There is another file <code>hadoop-ec2-init-remote-cdh2.sh</code> (see below) and a directory called <code>lars-test-cluster</code>, which is created and maintained by the CDH Cloud Scripts. It contains a local <code>hadoop-site.xml</code> with your current AWS credentials (assuming you have them set in your <code>.profile</code> as per the documentation) that you can use to access S3 from your local Hadoop scripts. </p><p>For the sake of completeness, here is the other cluster configuration file:<br />
<br />
<pre class="brush:plain; gutter: false;">$ cat .hadoop-cloud/clusters-c1.xlarge.cfg
[lars-test-cluster]
image_id=ami-8759bfee
instance_type=c1.xlarge
key_name=root
availability_zone=us-east-1a
private_key=~/.ec2/root.pem
ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no
user_data_file=http://archive.cloudera.com/cloud/ec2/cdh2/hadoop-ec2-init-remote.sh
user_packages=lynx s3cmd
env=AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key></pre><br />
The <code>user_data_file</code> is where the version of Hadoop - and in this case even the version of Cloudera's Distribution for Hadoop - is chosen. You can replace the link with <br />
<br />
<pre class="brush:plain; gutter: false;">user_data_file=http://archive.cloudera.com/cloud/ec2/cdh3/hadoop-ec2-init-remote.sh</pre><br />
to use the newer CDH3, currently in beta.<br />
<br />
Also note that the AMIs are currently <em>only</em> available in the <code>us-east-x</code> zones and not in any of the others.<br />
</p><p>To conclude the setup, here is a list of possible configuration options:<br />
<br />
<table border="1" bordercolor="#000000" cellpadding="8" cellspacing="0" style="page-break-before: always;" width="95%"><thead>
<tr style="color: #FFFFFF; background-color: #000000; font-weight: bold"> <td>Option</td> <td>CLI</td> <td>Description</td> </tr>
</thead> <tbody>
<tr> <td style="white-space: nowrap;">cloud_provider</td> <td style="white-space: nowrap;">--cloud-provider</td> <td>The cloud provider, e.g. 'ec2' for Amazon EC2.</td> </tr>
<tr> <td style="white-space: nowrap;">auto_shutdown</td> <td style="white-space: nowrap;">--auto-shutdown</td> <td>The time in minutes after launch when an instance will be automatically shut down.</td> </tr>
<tr> <td style="white-space: nowrap;">image_id</td> <td style="white-space: nowrap;">--image-id</td> <td>The ID of the image to launch.</td> </tr>
<tr> <td style="white-space: nowrap;">instance_type</td> <td style="white-space: nowrap;">-t | --instance-type</td> <td>The type of instance to be launched. One of m1.small, m1.large, m1.xlarge, c1.medium, or c1.xlarge.</td> </tr>
<tr> <td style="white-space: nowrap;">key_name</td> <td style="white-space: nowrap;">-k | --key-name</td> <td>The key pair to use when launching instances. (Amazon EC2 only.)</td> </tr>
<tr> <td style="white-space: nowrap;">availability_zone</td> <td style="white-space: nowrap;">-z | --availability-zone</td> <td>The availability zone to run the instances in.</td> </tr>
<tr> <td style="white-space: nowrap;">private_key</td> <td style="white-space: nowrap;"></td> <td>Used with <code>update-slaves-file</code> command. The file is copied to all EC2 servers.</td> </tr>
<tr> <td style="white-space: nowrap;">ssh_options</td> <td style="white-space: nowrap;">--ssh-options</td> <td>SSH options to use.</td> </tr>
<tr> <td style="white-space: nowrap;">user_data_file</td> <td style="white-space: nowrap;">-f | --user-data-file</td> <td>The URL of the file containing user data to be made available to instances.</td> </tr>
<tr> <td style="white-space: nowrap;">user_packages</td> <td style="white-space: nowrap;">-p | --user-packages</td> <td>A space-separated list of packages to install on instances on start up.</td> </tr>
<tr> <td style="white-space: nowrap;">env</td> <td style="white-space: nowrap;">-e | --env</td> <td>An environment variable to pass to instances. (May be specified multiple times.)</td> </tr>
<tr> <td style="white-space: nowrap;"></td> <td style="white-space: nowrap;">--client-cidr</td> <td>The CIDR of the client, which is used to allow access through the firewall to the master node. (May be specified multiple times.)</td> </tr>
<tr> <td style="white-space: nowrap;"></td> <td style="white-space: nowrap;">--security-group</td> <td>Additional security groups within which the instances should be run. (Amazon EC2 only.) (May be specified multiple times.)</td> </tr>
</tbody> </table></p><h4>Custom initialization</h4><p>Now you can configure and start up a cluster on EC2. Sooner or later though you will face more challenging issues. One that hits home early on is compression. You are encouraged to use compression in Hadoop as it saves not only storage but also bandwidth, as less data needs to be transferred over the wire. See <a href="http://www.cloudera.com/blog/2009/06/parallel-lzo-splittable-compression-for-hadoop/">this</a> and <a href="http://www.cloudera.com/blog/2009/11/hadoop-at-twitter-part-1-splittable-lzo-compression/">this</a> post for "subtle" hints. Cool, so let's switch on compression - must be easy, right? Well, not exactly. For starters, choosing the appropriate codec is not trivial. A very popular one is LZO as described in the posts above because it has many advantages in combination with Hadoop's MapReduce. The problem is that it is <a href="http://code.google.com/p/hadoop-gpl-compression/">GPL licensed</a> and therefore not shipped with Hadoop. You actually have to compile it yourself to be able to install it subsequently. How this is done is described <a href="http://github.com/toddlipcon/hadoop-lzo-packager">here</a>. You need to follow those steps and compile an installable package on all AMIs you want to use later. For example, log into the master of your EC2 Hadoop cluster and execute the following commands:<br />
<br />
<pre class="brush:plain; gutter: false;">$ hadoop-ec2 login <your-cluster-name>
# cd ~
# apt-get install subversion devscripts ant git-core liblzo2-dev
# git clone http://github.com/toddlipcon/hadoop-lzo-packager.git
# cd hadoop-lzo-packager/
# SKIP_RPM=1 ./run.sh
# cd build/deb/
# dpkg -i toddlipcon-hadoop-lzo_20100913142659.20100913142512.6ddda26-1_i386.deb </pre><br />
Note: Since I am running Ubuntu AMIs I used the <code>SKIP_RPM=1</code> flag to skip RedHat package generation. <br />
<br />
Copy the final .deb file to a safe location, naming it <code>hadoop-lzo_i386.deb</code> or <code>hadoop-lzo_amd64.deb</code>, using <code>scp</code> for example. Obviously do the same for the yum packages if you prefer the Fedora AMIs. <br />
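<br />
For instance, pulling the freshly built package down from the cluster could look like this - the host name and file names are just placeholders:<br />
<br />
<pre class="brush:plain; gutter: false;">$ scp root@<your-ec2-master>:hadoop-lzo-packager/build/deb/toddlipcon-hadoop-lzo_*_i386.deb hadoop-lzo_i386.deb</pre>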
<br />
The next step is to figure out how to install the packages we just built during the bootstrap process described above. This is where the <code>user_data_file</code> comes back into play. Rather than copying the .deb packages around, we save them on S3, using a tool like <a href="http://s3tools.org/s3cmd">s3cmd</a>. For example:<br />
<br />
<pre class="brush:plain; gutter: false;">$ s3cmd put hadoop-lzo_i386.deb s3://dpkg/</pre><br />
Now we can switch from the default init script to our own. Use <code>wget</code> to download the default file <br />
<br />
<pre class="brush:plain; gutter: false;">$ wget http://archive.cloudera.com/cloud/ec2/cdh2/hadoop-ec2-init-remote.sh
$ mv hadoop-ec2-init-remote.sh .hadoop-cloud/hadoop-ec2-init-remote-cdh2.sh</pre><br />
In the <code>clusters.cfg</code> we need to replace the link with our local file like so<br />
<br />
<pre class="brush:plain; gutter: false;">user_data_file=file:///home/lars/.hadoop-cloud/hadoop-ec2-init-remote-cdh2.sh</pre><br />
Note that we cannot use <code>~/.hadoop-cloud/...</code> as the filename because the Python code does not resolve the Bash file path syntax.<br />
<br />
The local init script can now be adjusted as needed. Here we add functions that set up <code>s3cmd</code> and then install the LZO packages on server startup:<br />
<br />
<pre class="brush:shell;">...
fi
service $HADOOP-$daemon start
}
function install_s3cmd() {
install_packages s3cmd # needed for LZO package on S3
cat > /tmp/.s3cfg << EOF
[default]
access_key = $AWS_ACCESS_KEY_ID
acl_public = False
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
cloudfront_resource = /2008-06-30/distribution
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
encoding = UTF-8
encrypt = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
list_md5 = False
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
recursive = False
recv_chunk = 4096
secret_key = $AWS_SECRET_ACCESS_KEY
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
urlencoding_mode = normal
use_https = False
verbosity = WARNING
EOF
}
function install_hadoop_lzo() {
INSTANCE_TYPE=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-type`
case $INSTANCE_TYPE in
m1.large|m1.xlarge|m2.xlarge|m2.2xlarge|m2.4xlarge|c1.xlarge|cc1.4xlarge)
HADOOP_LZO="hadoop-lzo_amd64"
;;
*)
HADOOP_LZO="hadoop-lzo_i386"
;;
esac
if which dpkg &> /dev/null; then
HADOOP_LZO_FN=${HADOOP_LZO}.deb
s3cmd -c /tmp/.s3cfg get --force s3://dpkg/$HADOOP_LZO_FN /tmp/$HADOOP_LZO_FN
dpkg -i /tmp/$HADOOP_LZO_FN
elif which rpm &> /dev/null; then
# todo
echo "do it yum style..."
fi
}
register_auto_shutdown
update_repo
install_user_packages
install_s3cmd
install_hadoop
install_hadoop_lzo
configure_hadoop</pre><br />
By the way, once a cluster is up you can verify what the user data script did (or even "is doing" if you log in promptly) by checking the <code>/var/log/messages</code> file on for example the Hadoop master node:<br />
<br />
<pre class="brush:shell;">$ hadoop-ec2 login lars-test-cluster
# cat /var/log/messages
...
Sep 14 12:10:55 ip-10-242-18-80 user-data: + install_hadoop_lzo
Sep 14 12:10:55 ip-10-242-18-80 user-data: ++ wget -q -O - http://169.254.169.254/latest/meta-data/instance-type
Sep 14 12:10:55 ip-10-242-18-80 user-data: + INSTANCE_TYPE=c1.medium
Sep 14 12:10:55 ip-10-242-18-80 user-data: + case $INSTANCE_TYPE in
Sep 14 12:10:55 ip-10-242-18-80 user-data: + HADOOP_LZO=hadoop-lzo_i386
Sep 14 12:10:55 ip-10-242-18-80 user-data: + which dpkg
Sep 14 12:10:55 ip-10-242-18-80 user-data: + HADOOP_LZO_FN=hadoop-lzo_i386.deb
Sep 14 12:10:55 ip-10-242-18-80 user-data: + s3cmd -c /tmp/.s3cfg get --force s3://dpkg/hadoop-lzo_i386.deb /tmp/hadoop-lzo_i386.deb
Sep 14 12:10:55 ip-10-242-18-80 user-data: Object s3://dpkg/hadoop-lzo_i386.deb saved as '/tmp/hadoop-lzo_i386.deb' (65810 bytes in 0.0 seconds, 5.06 MB/s)
Sep 14 12:10:56 ip-10-242-18-80 user-data: + dpkg -i /tmp/hadoop-lzo_i386.deb
Sep 14 12:10:56 ip-10-242-18-80 user-data: Selecting previously deselected package toddlipcon-hadoop-lzo.
Sep 14 12:10:56 ip-10-242-18-80 user-data: (Reading database ... 24935 files and directories currently installed.)
Sep 14 12:10:56 ip-10-242-18-80 user-data: Unpacking toddlipcon-hadoop-lzo (from /tmp/hadoop-lzo_i386.deb) ...
Sep 14 12:10:56 ip-10-242-18-80 user-data: Setting up toddlipcon-hadoop-lzo (20100913142659.20100913142512.6ddda26-1) ...
...</pre><br />
Note: A quick tip in case you edit the init script yourself and are going to add configuration data that is output to a file using <code>cat</code> (see <code>cat > /tmp/.s3cfg << EOF</code> above): make sure that the final "EOF" has <u>NO</u> trailing whitespaces or the script fails miserably. I had "EOF " (note the trailing space) as opposed to "EOF" and it took me a while to find that! The script would fail to run with an "unexpected end of file" error.<br />
</p><p>A comment on <a href="http://aws.amazon.com/elasticmapreduce/">EMR</a> (or Elastic MapReduce), Amazon's latest offering with regard to Hadoop support: it is a wrapper around launching a cluster on your behalf and executing MapReduce jobs or Hive queries etc. While this will help many to be up and running with "cloud based" MapReduce work, it also has a few drawbacks: for starters you have to work with what you are given in terms of Hadoop versioning. You have to rely on Amazon to keep it current, and any "special" version you would like to try may not work at all. Furthermore you have no option to install LZO as described above, i.e. the whole bootstrap process is automated and not accessible to you for modifications. And finally, you pay for it on top of the standard EC2 rates, so it comes at a premium.</p><h4>Provision data</h4><p>We already touched on S3 above but let me get back to it for a moment. Small files like the installation packages are obviously no issue at all. What is a problem though is when you have to deal with huge files larger than the implicit 5GB maximum file size S3 allows. You have two choices here: either split the files into smaller ones or use an IO layer that does that task for you. That feature is built right into Hadoop itself. This is of course <a href="http://wiki.apache.org/hadoop/AmazonS3">documented</a> but let me add a few notes that may help you understand the implications a little bit better. First, here is a table comparing the different tools you can use:<br />
<br />
<table border="1" bordercolor="#000000" cellpadding="8" cellspacing="0" style="page-break-before: always;" width="95%"><thead>
<tr style="color: #FFFFFF; background-color: #000000; font-weight: bold"> <td style="white-space: nowrap;">Tool Name</td> <td>Supported</td> <td>Description</td> </tr>
</thead> <tbody>
<tr> <td>s3cmd</td> <td>s3</td> <td>Supports access to S3 as provided by the AWS API's and also the S3 Management Console over the web.</td> </tr>
<tr> <td>hadoop</td> <td style="white-space: nowrap;">s3, s3n</td> <td>Supports raw or direct S3 access as well as a specialized Hadoop filesystem on S3.</td> </tr>
</tbody> </table><br />
The thing that is not obvious initially is that "s3cmd get s3://..." is <u>not</u> the same as "hadoop fs -get s3://...". When you use a standard tool that implements the S3 API like <code>s3cmd</code> then you use <code>s3://<bucket-name>/...</code> as the object/file URI. In Hadoop terms that is referred to as "raw" or "native" S3. And if you want to use Hadoop to access a file on S3 in that mode then the URI is <code>s3n://<bucket-name>/...</code> - note the "s3n" URI scheme. In contrast, if you use the "s3" scheme with Hadoop it employs a special file system mode that stores the large files in smaller binary files on S3 completely transparent to the user. For example:<br />
<br />
<pre class="brush:shell;">$ hadoop fs -put verylargefile.log s3://my-bucket/logs/20100916/
$ s3cmd ls s3://my-bucket/
DIR s3://my-bucket//
2010-09-16 07:44 33554432 s3://my-bucket/block_-1289596344515350280
2010-09-16 07:45 33554432 s3://my-bucket/block_-15869508987376965
2010-09-16 07:46 33554432 s3://my-bucket/block_-172539355612092125
2010-09-16 07:45 33554432 s3://my-bucket/block_-1894993863630732603
2010-09-16 07:43 33554432 s3://my-bucket/block_-2049322783060796466
2010-09-16 07:51 33554432 s3://my-bucket/block_-2070316024499434597
2010-09-16 07:43 33554432 s3://my-bucket/block_-2107321687364706212
2010-09-16 07:46 33554432 s3://my-bucket/block_-2117877727016155804
...</pre><br />
The following table provides a comparison between the various access modes and their file size limitations:<br />
<br />
<table border="1" bordercolor="#000000" cellpadding="8" cellspacing="0" style="page-break-before: always;" width="95%"><thead>
<tr style="color: #FFFFFF; background-color: #000000; font-weight: bold"> <td>Type</td> <td>Mode</td> <td>Limit</td> <td>Example</td> </tr>
</thead> <tbody>
<tr> <td style="white-space: nowrap;">S3 API</td> <td>native</td> <td>5GB</td> <td>s3cmd get s3://my-bucket/my-s3-dir/my-file.name</td> </tr>
<tr> <td>Hadoop</td> <td>native</td> <td>5GB</td> <td>hadoop fs -get s3n://my-bucket/my-s3-dir/my-file.name</td> </tr>
<tr> <td>Hadoop</td> <td style="white-space: nowrap;">binary blocks</td> <td>unlimited</td> <td>hadoop fs -get s3://my-bucket/my-hadoop-path/my-file.name</td> </tr>
</tbody> </table><br />
You may now ask yourself which one to use. If you will never deal with very large files it may not matter. But if you do, then you need to decide if you use Hadoop's binary filesystem or chop files yourself to fit into 5GB. Also keep in mind that once you upload files using Hadoop's binary filesystem then you can <u>NOT</u> go back to the native tools as the files stored in your S3 bucket are named (seemingly) randomly and content is spread across many of those smaller files as can be seen in the example above. There is no direct way to parse these files yourself outside of Hadoop. <br />
<br />
One final note on S3 and provisioning data: it seems to make more sense to copy data from S3 into HDFS before running a job, not just because of the improved IO performance (keyword here: data locality!) but also with regard to stability. I have seen jobs fail when reading directly from S3 that succeeded happily when reading from HDFS. And copying data from S3 to EC2 is free, so you may want to try your luck with either option and see what is best for your use-case.<br />
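<br />
If you decide to stage the data in HDFS first, a single <code>distcp</code> run per input directory is usually all it takes - a sketch with made-up bucket and path names:<br />
<br />
<pre class="brush:plain; gutter: false;">$ hadoop distcp s3n://my-bucket/logs/20100916 /user/lars/logs/20100916</pre>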
</p><h4>ETL and Processing</h4><p>The last part in a usual workflow is to process the data that we now have, nicely compressed and splittable, on S3 or HDFS. Ideally this would be Hive queries, if the data is already in a "Hive ready" format. Often though the data comes from legacy resources and needs to be processed before it can be queried. This process is usually referred to as <a href="http://en.wikipedia.org/wiki/Extract,_transform,_load">Extract, transform, load</a> or abbreviated as "ETL". It can comprise various steps executing dedicated applications or scripts that prune and transform raw input files. I will leave this for another post though, as this points to the same problem we addressed above: you have many tools you could use and have to decide which suits you best. There is <a href="http://kettle.pentaho.com/">Kettle</a> or <a href="http://static.springsource.org/spring-batch/">Spring Batch</a> and also the new kid on the block, <a href="http://yahoo.github.com/oozie/">Oozie</a>. Some combine both steps while Oozie for example concentrates on the workflow aspect.</p> <br />
<p>This is particularly interesting as we can use Oozie to spin up our EC2 clusters as well as run the ETL job (which could be a Kettle job for example), followed by the Hive queries. Add <a href="http://github.com/cloudera/sqoop">Sqoop</a> and you have a tool to read from and write to legacy databases in the process. But as I said, I leave this whole topic for a follow-up post. I do believe though that it is important to understand and document the full process of running Hadoop in the "cloud". Only then do you have the framework to run the full business process on Amazon's Web Services (or any other cloud computing provider).</p><h4>Conclusion</h4><p>With Whirr being released it seems like the above may become somewhat obsolete soon. I will look into Whirr more and update the post to show you how the same is achieved. My preliminary investigation shows though that you have the same issues - or rather "advanced challenges", to be fair, as Whirr is not at fault here. Maybe one day we will have an Apache-licensed <a href="https://issues.apache.org/jira/browse/HADOOP-6349">alternative</a> to LZO available and installing a suitable compression codec will be much easier. For now this is not the case.<br />
<br />
Another topic we have not touched upon is local storage in EC2. Usually you have an attached volume that is destroyed once the instance is shut down. To get around this restriction you can create Snapshots and mount them as <a href="http://aws.amazon.com/ebs/">Elastic Block Storage</a> (or EBS) which are persisted across server restarts. They are also supposedly faster than the default volume. This is yet another interesting topic I am planning to post about as especially write performance in EC2 is really, really bad - and that may affect the above ETL process in unsuspected ways. But on the other hand you get persistency and being able to start and stop a cluster while retaining the data it had stored. The CDH Cloud Scripts have full support for EBS while Whirr is said to not have that working yet (although <a href="https://issues.apache.org/jira/browse/WHIRR-3">WHIRR-3</a> seems to say it is implemented).</p><p>Let me know if you are interested in a particular topic regarding this post and which I may not have touched upon. I am curious to hear what you are doing with Hadoop on EC2, so please drop me a note.</p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-860423771829255614.post-35217361650628253062010-05-28T10:27:00.000-07:002010-05-28T11:00:23.614-07:00HBase File Locality in HDFSOne of the more ambiguous things in <a href="http://hadoop.apache.org/common/">Hadoop</a> is block replication: it happens automatically and you should not have to worry about it. <a href="http://hadoop.apache.org/hbase/">HBase</a> relies on it 100% to provide the data safety as it stores its files into the <a href="http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html">distributed file system</a>. While that works completely transparent, one of the more advanced questions asked though is how does this affect performance? This usually arises when the user starts writing <a href="http://hadoop.apache.org/mapreduce/">MapReduce</a> jobs against either HBase or Hadoop directly. Especially with larger data being stored in HBase, how does the system take care of placing the data close to where it is needed? This is referred to data locality and in case of HBase using the Hadoop file system (HDFS) there may be doubts how that is working. <br />
<br />
First let's see how Hadoop handles this. The MapReduce documentation advertises the fact that tasks run close to the data they process. This is achieved by breaking up large files in HDFS into smaller chunks, so-called blocks. That is also the reason why the block size in Hadoop is much larger than what you may know from operating systems and their file systems. The default setting is 64MB, but usually 128MB is chosen, if not even larger when you are sure all your files are larger than a single block in size. Each block maps to a task that is run to process the contained data. That also means larger block sizes equal fewer map tasks, as the number of mappers is driven by the number of blocks that need processing. Hadoop knows where the blocks are located and runs the map tasks directly on the node that hosts one of them (replication means it has a few hosts to choose from). This is how it guarantees data locality during MapReduce.<br />
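<br />
You can see this block-to-host mapping yourself with <code>hadoop fsck</code>, which lists the blocks of a file and the DataNodes they are stored on (the path here is just an example):<br />
<pre class="brush:plain; gutter: false;">$ hadoop fsck /user/lars/input/access.log -files -blocks -locations</pre>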
<br />
Back to HBase. When you have arrived at that point with Hadoop and you now understand that it can process data locally you start to question how this may work with HBase. If you have read my <a href="http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html">post</a> on HBase's storage architecture you saw that HBase simply stores files in HDFS. It does so for the actual data files (HFile) as well as its log (WAL). And if you look into the code it simply uses <code>FileSystem.create(Path path)</code> to create these. When you then consider two access patterns, a) direct random access and b) MapReduce scanning of tables, you wonder if care was taken that the HDFS blocks are close to where they are read by HBase. <br />
<br />
One thing upfront: if you do not co-locate Hadoop and HBase on the same cluster but instead employ a separate Hadoop as well as a stand-alone HBase cluster, then there is <u>no</u> data locality - and there can't be. That is equivalent to running a separate MapReduce cluster, which would not be able to execute tasks directly on the datanodes. For data locality it is imperative to have them running on the same cluster: Hadoop (as in the HDFS), MapReduce, and HBase. End of story. <br />
<br />
OK, you have them all co-located on a single (hopefully larger) cluster? Then read on. How does Hadoop figure out where data is located as HBase accesses it? Remember the access patterns above; both go through a single piece of software called a RegionServer. Case a) uses random access patterns while b) scans all contiguous rows of a table, but both do so through the same API. As explained in my referenced post and mentioned above, HBase simply stores files, and those get distributed as replicated blocks across all data nodes of the HDFS. Now imagine you stop HBase after saving a lot of data and restart it subsequently. The region servers are restarted and get assigned a seemingly random set of regions. At this very point there is no data locality guaranteed - how could there be?<br />
<br />
The most important factor is that HBase is not restarted frequently and that it performs housekeeping on a regular basis. These so-called compactions rewrite files as new data is added over time. All files in HDFS, once written, are immutable (for all sorts of reasons). Because of that, data is written into new files, and as their number grows HBase compacts them into another set of new, consolidated files. And here is the kicker: HDFS is smart enough to put the data where it is needed! How does that work, you ask? We need to take a deep dive into Hadoop's source code and see how the above <code>FileSystem.create(Path path)</code> call that HBase uses works. We are running on HDFS here, so we are actually using <code>DistributedFileSystem.create(Path path)</code>, which looks like this:<br />
<pre class="brush:plain; gutter: false;">public FSDataOutputStream create(Path f) throws IOException {
return create(f, true);
}</pre>It returns an <code>FSDataOutputStream</code> and that is created like so:<br />
<pre class="brush:plain; gutter: false;">public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, Progressable progress) throws IOException {
return new FSDataOutputStream(dfs.create(getPathName(f), permission, overwrite, replication, blockSize, progress, bufferSize), statistics);
}</pre>It uses a <code>DFSClient</code> instance that is the "umbilical" cord connecting the client with the NameNode:<br />
<pre class="brush:plain; gutter: false;">this.dfs = new DFSClient(namenode, conf, statistics);</pre>What is returned though is a <code>DFSClient.DFSOutputStream</code> instance. As you write data into the stream the <code>DFSClient</code> aggregates it into "packages" which are then written as blocks to the data nodes. This happens in <code>DFSClient.DFSOutputStream.DataStreamer</code> (please hang in there, we are close!) which runs as a daemon thread in the background. The magic unfolds now in a few hops on the stack, first in the daemon <code>run()</code> it gets the list of nodes to store the data on:<br />
<pre class="brush:plain; gutter: false;">nodes = nextBlockOutputStream(src);</pre>This in turn calls:<br />
<pre class="brush:plain; gutter: false;">long startTime = System.currentTimeMillis();
lb = locateFollowingBlock(startTime);
block = lb.getBlock();
nodes = lb.getLocations();</pre>We follow further down and see that <code>locateFollowingBlock()</code> calls:<br />
<pre class="brush:plain; gutter: false;">return namenode.addBlock(src, clientName);</pre>Here is where it all comes together. The name node is called to add a new block and the <code>src</code> parameter indicates for what file, while <code>clientName</code> is the name of the <code>DFSClient</code> instance. I skip one more small method in between and show you the next bigger step involved:<br />
<pre class="brush:plain; gutter: false;">public LocatedBlock getAdditionalBlock(String src, String clientName) throws IOException {
...
INodeFileUnderConstruction pendingFile = checkLease(src, clientName);
...
fileLength = pendingFile.computeContentSummary().getLength();
blockSize = pendingFile.getPreferredBlockSize();
clientNode = pendingFile.getClientNode();
replication = (int)pendingFile.getReplication();
// choose targets for the new block tobe allocated.
DatanodeDescriptor targets[] = replicator.chooseTarget(replication, clientNode, null, blockSize);
...
}</pre>We are finally getting to the core of this code in the <code>replicator.chooseTarget()</code> call:<br />
<pre class="brush:plain; gutter: false;">private DatanodeDescriptor chooseTarget(int numOfReplicas, DatanodeDescriptor writer, List<Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeDescriptor> results) {
if (numOfReplicas == 0 || clusterMap.getNumOfLeaves()==0) {
return writer;
}
int numOfResults = results.size();
boolean newBlock = (numOfResults==0);
if (writer == null && !newBlock) {
writer = (DatanodeDescriptor)results.get(0);
}
try {
switch(numOfResults) {
case 0:
writer = chooseLocalNode(writer, excludedNodes, blocksize, maxNodesPerRack, results);
if (--numOfReplicas == 0) {
break;
}
case 1:
chooseRemoteRack(1, results.get(0), excludedNodes, blocksize, maxNodesPerRack, results);
if (--numOfReplicas == 0) {
break;
}
case 2:
if (clusterMap.isOnSameRack(results.get(0), results.get(1))) {
chooseRemoteRack(1, results.get(0), excludedNodes, blocksize, maxNodesPerRack, results);
} else if (newBlock) {
chooseLocalRack(results.get(1), excludedNodes, blocksize, maxNodesPerRack, results);
} else {
chooseLocalRack(writer, excludedNodes, blocksize, maxNodesPerRack, results);
}
if (--numOfReplicas == 0) {
break;
}
default:
chooseRandom(numOfReplicas, NodeBase.ROOT, excludedNodes, blocksize, maxNodesPerRack, results);
}
} catch (NotEnoughReplicasException e) {
FSNamesystem.LOG.warn("Not able to place enough replicas, still in need of " + numOfReplicas);
}
return writer;
}</pre>Recall that we started with the <code>DFSClient</code> and created a file which was subsequently filled with data. As the blocks need writing out, the above code first checks if that can be done on the same host that the client is on, i.e. the "writer". That is "case 0". In "case 1" the code tries to find a remote rack to have a distant replica of the block. Lastly it fills the list of required replicas with machines on the local rack or on another rack. <br />
<br />
For HBase this means that if the region server stays up long enough (which is the default behavior), then after a major compaction on all tables - which can be invoked manually or is triggered by a configuration setting - it has the files local on the same host. The data node that shares the same physical host has a copy of all the data the region server requires. Whether you are running a scan, a get, or any other use-case, you can be sure to get the best performance.<br />
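<br />
Since a major compaction can be invoked manually, here is a minimal sketch of how one might trigger it from a client application using the 0.20-era <code>HBaseAdmin</code> API - the table name "usertable" is only a placeholder:<br />
<br />
<pre class="brush:java">import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class MajorCompactExample {

  public static void main(String[] args) throws Exception {
    // reads hbase-site.xml from the classpath to locate the cluster
    HBaseConfiguration conf = new HBaseConfiguration();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // asks the servers to major compact all regions of the given table,
    // rewriting the store files and restoring locality over time
    admin.majorCompact("usertable");
  }
}</pre>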
<br />
Finally, a good overview of the HDFS design and data replication can be found <a href="http://hadoop.apache.org/common/docs/r0.20.2/hdfs_design.html#Data+Replication">here</a>. Also note that the HBase team is working on redesigning how the Master assigns the regions to servers. The plan is to improve it so that regions are deployed on the server where most of their blocks are. This will be particularly useful after a restart because it would guarantee better data locality right off the bat. Stay tuned!Unknownnoreply@blogger.com11tag:blogger.com,1999:blog-860423771829255614.post-60867408848071292342010-05-14T08:05:00.000-07:002010-05-14T08:21:39.317-07:00Minimal Katta Lucene ClientA quick post explaining how a minimal <a href="http://katta.sourceforge.net/">Katta</a> Lucene Client is set up. I found this was sort of missing from the Katta site and documentation, and since I ran into an issue along the way I thought I'd post my notes here for others who may attempt the same.<br />
<br />
The first question was which of the libs need to be supplied for a client to use a remote Katta cluster. Please note that I am referring here to a "canonical" setup with a distributed Lucene index (which I created on <a href="http://hadoop.apache.org/common/">Hadoop</a> from data in <a href="http://hadoop.apache.org/hbase/">HBase</a> using a <a href="http://www.larsgeorge.com/2009/05/hbase-mapreduce-101-part-i.html">MapReduce</a> job). I found that these libs need to be added; the rest is only required on the server:<br />
<br />
<pre class="brush:plain; gutter: false;">katta-core-0.6.rc1.jar
lucene-core-3.0.0.jar
zookeeper-3.2.2.jar
zkclient-0.1-dev.jar
hadoop-core-0.20.1.jar
log4j-1.2.15.jar
commons-logging-1.0.4.jar</pre><br />
Here is the code for the client. Please note that this is a simple test app that expects the name of the index, the default Lucene search field, and the query on the command line. I did not add usage info as this is just a proof of concept.<br />
<br />
<pre class="brush:java">package com.worldlingo.test;
import net.sf.katta.lib.lucene.Hit;
import net.sf.katta.lib.lucene.Hits;
import net.sf.katta.lib.lucene.LuceneClient;
import net.sf.katta.util.ZkConfiguration;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Writable;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;
import java.util.Arrays;
import java.util.Map;
public class KattaLuceneClient {
public static void main(String[] args) {
try {
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);
Query query = new QueryParser(Version.LUCENE_CURRENT, args[1], analyzer).parse(args[2]);
// assumes "/katta.zk.properties" available on classpath!
ZkConfiguration conf = new ZkConfiguration();
LuceneClient luceneClient = new LuceneClient(conf);
Hits hits = luceneClient.search(query, Arrays.asList(args[0]).toArray(new String[1]), 99);
int num = 0;
for (Hit hit : hits.getHits()) {
MapWritable mw = luceneClient.getDetails(hit);
for (Map.Entry<Writable, Writable> entry : mw.entrySet()) {
System.out.println("[" + (num++) + "] key -> " + entry.getKey() + ", value -> " + entry.getValue());
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
}</pre><br />
The first part is standard Lucene code where we parse the query string with an analyzer. The second part is Katta related as it creates a configuration object, which assumes we have a <a href="http://hadoop.apache.org/zookeeper/">ZooKeeper</a> configuration in the class path. That config only needs to have these lines set:<br />
<br />
<pre class="brush:plain; gutter: false;">zookeeper.embedded=false
zookeeper.servers=server-1:2181,server-2:2181</pre><br />
The first line is really only used on the server, so it can be left out on the client. I simply copied the server <code>katta.zk.properties</code> to match my setup. The important line is the second one, which tells the client where the ZooKeeper responsible for managing the Katta cluster is running. With this info the client is able to distribute the search calls to the correct Katta slaves.<br />
<br />
Further along we create a <code>LuceneClient</code> instance and start the actual search. Here I simply used no sorting and set the maximum number of hits returned to 99. These two values could optionally be added to the command line parameters, but they are trivial and not required here - this is a minimal test client after all ;)<br />
<br />
The last part of the app simply prints out the fields and their values for each found document. Please note that Katta is using the low-level <a href="http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/Writable.html"><code>Writable</code></a> class as part of its response. This is not "too" intuitive for the uninitiated. These are actually <a href="http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/Text.html"><code>Text</code></a> instances so they can safely be converted to strings using ".toString()".<br />
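<br />
If you only need a single field from the returned details, a small helper like the following sketch can hide the Writable handling. The class, method, and field names here are made up for illustration only:<br />
<br />
<pre class="brush:java">import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class FieldHelper {

  // Returns the value of a single stored field as a String, or null if the
  // field is not present. Keys and values are Text instances, so calling
  // toString() on them is safe.
  public static String getField(MapWritable details, String fieldName) {
    Writable value = details.get(new Text(fieldName));
    return value == null ? null : value.toString();
  }
}</pre>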
<br />
Finally, I also checked the test project into my <a href="http://github.com/larsgeorge/katta-lucene-client">GitHub</a> account for your perusal. Have fun!Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-860423771829255614.post-2866325416572504072010-05-01T10:01:00.000-07:002010-05-01T10:01:06.314-07:003rd Munich OpenHUG MeetingI am pleased to invite you to our third Munich Open Hadoop User Group Meeting! <br />
<br />
As always we are looking forward to seeing everyone again and welcome new attendees to join our group. We are enthusiastic about all things related to scalable, distributed storage systems. We are not limiting ourselves to a particular system and appreciate anyone who would like to share their experiences.<br />
<br />
When: Thursday May 6th, 2010 at 6pm (open end)<br />
Where: eCircle AG, Nymphenburger Straße 86, 80636 München ["Bruckmann" Building, "U1 Mailinger Str", <a href="http://www.ecircle.com/de/kontakt/anfahrt.html">map</a> (in German) and look for the signs]<br />
<br />
Thanks again to Bob Schulze from eCircle for providing the infrastructure.<br />
<br />
We have a talk scheduled by Stefan Seelmann who is a member of the project committee for the Apache Directory project. This is followed by an open discussion.<br />
<br />
Please RSVP at <a href="https://www.xing.com/events/3rd-munich-openhug-meeting-506082">Xing</a> and Yahoo's <a href="http://upcoming.yahoo.com/event/5771044/BY/Mnchen/3rd-Munich-OpenHUG-Meeting/eCircle-AG">Upcoming</a>.<br />
<br />
Looking forward to seeing you there!<br />
<br />
Cheers,<br />
LarsUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-860423771829255614.post-27660925104912066482010-02-12T16:01:00.000-08:002010-02-12T16:01:16.186-08:00FOSDEM 2010 NoSQL TalkLet me take a minute to wrap up my <a href="http://fosdem.org/2010/">FOSDEM 2010</a> experience. I was part of the <a href="http://fosdem.org/2010/schedule/devrooms/nosql">NoSQL DevRoom</a> organized by <a href="http://twitter.com/stevenn">@stevenn</a> from <a href="http://outerthought.org/">Outerthought</a>, who I had the pleasure to visit before a few months back as an <a href="http://www.larsgeorge.com/2009/05/european-hbase-ambassador.html">HBase Ambassador</a>. <br />
<br />
First things first, the NoSQL DevRoom was an absolute success and I had a blast attending it. I deliberately did not walk around to see talks outside the NoSQL track - although there were many good ones - because I wanted to see the other NoSQL projects and what they have to offer. I thought it was great; a good vibe was felt throughout the whole day as the audience got a whirlwind tour through the NoSQL landscape. The room was full to the brim for most presentations and some folks had to miss out as we could not let any more people in. This proved the great interest in this fairly new kind of technology. Exciting!<br />
<br />
The focus of my talk was the history I have with HBase, starting in late 2007. At this point I would like to take the opportunity to thank Michael Stack, the lead of HBase, as he helped me many times back then to sort out the problems I ran into. I would also like to say that if you start with HBase today you will not have these problems, as HBase has matured tremendously since then. It is an overall stable product and can solve, with great ease, scalability issues you may face with regular RDBMSs today.<br />
<br />
So the talk I gave did not really sell all the features nor did it explain everything fully. I felt this could be left to the reader to look up on the project's website (or here on my blog) and hence I focused on my use case only. First up, here are the slides. <br />
<br />
<object data="http://viewer.docstoc.com/" height="450" id="_ds_24778119" name="_ds_24778119" type="application/x-shockwave-flash" width="500"> <param name="FlashVars" value="doc_id=24778119&mem_id=602922&doc_type=pdf&fullscreen=0&showrelated=0&showotherdocs=0&showstats=0 "/><param name="movie" value="http://viewer.docstoc.com/" /><param name="allowScriptAccess" value="always" /><param name="allowFullScreen" value="true" /></object> <br />
<span style="font-size: xx-small;"><a href="http://www.docstoc.com/docs/24778119/My%20Life%20with%20HBase%20-%20FOSDEM%202010%20NoSQL"> My Life with HBase - FOSDEM 2010 NoSQL</a> - </span> <br />
<br />
After my talk and throughout the rest of the day I also had great conversations with the attendees who had many and great questions.<br />
<br />
Having listened to the other talks though, I felt I probably could have done a better job selling HBase to the audience. I could have reported on use-cases in well known companies, given better performance numbers, and so on. I have learned a lesson and will make sure I do a better job next time around. I guess this is also another facet of what this is about, i.e. learning to achieve a higher level of professionalism.<br />
<br />
But as I said above, my intent was to report about <u>my</u> life with HBase. I am grateful that it was accepted as such, and let me cite Todd Hoff (see <a href="http://highscalability.com/blog/2010/2/12/hot-scalability-links-for-february-12-2010.html">Hot Scalability Links for February 12, 2010</a>) who put it in such nice words: <br />
<blockquote>"The hardscabble tale of HBase's growth from infancy to maturity. A very good introduction and overview of HBase."</blockquote>Thank you!<br />
<br />
Finally here is the video of the talk:<br />
<br />
<object height="443" width="474"><param name="movie" value="http://www.parleys.com/share/parleysshare2.swf?pageId=1859"></param><param name="allowFullScreen" value="true"></param><param name="pageId" value="1859"></param><embed src="http://www.parleys.com/share/parleysshare2.swf?pageId=1859" type="application/x-shockwave-flash" allowfullscreen="true" width="474" height="443"></embed></object><br />
<br />
I am looking forward to more NoSQL events in Europe in the near future and will attempt to represent HBase once more (including those adjustments I mentioned above). My hope is that we as Europeans are able to adopt these new technologies and stay abreast of the rest of the world. We sure have smart people to do so.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-860423771829255614.post-47807957654787622142010-02-05T12:20:00.000-08:002010-02-05T13:07:00.145-08:00IvyDE and HBase 0.21 (trunk)If you are staying on top of HBase development and frequently update to the HBase trunk (0.21.0-dev at the time of this post) you may have noticed that we now have support for <a href="http://ant.apache.org/ivy/">Apache Ivy</a> (see <a href="http://issues.apache.org/jira/browse/HBASE-1433">HBASE-1433</a>). This is good because it allows better control over the dependencies on the required jar files. It does have a few drawbacks though. One issue is that you must be online to get your initial set of jars. You can also set up a local mirror, or reuse the one you need for Hadoop anyway, to share some of them.<br />
<br />
Another issue is that it pulls in many more libs as part of the dependency resolving process. This reminds me a bit of <code>aptitude</code> when you try to install Java, for example on Debian: it often wants to pull in a litany of "required" packages, but upon closer look many are only recommended and need not be installed.<br />
<br />
Finally you need to get the jar files somehow into your development environment. I am using Eclipse 3.5 on my Windows 7 PC as well as on my MacOS machines. If you have not run <code>ant</code> from the command line yet, you have no jars downloaded and opening the project in Eclipse yields an endless amount of errors. You have two choices: you can run <code>ant</code> to get all jars and then add them to the project in Eclipse. But that is rather static and does not work well with future changes. It also is not the "Ivy" way of resolving the libraries automatically.<br />
<br />
<a href="http://2.bp.blogspot.com/_Cib_A77V54U/S2xMXVWKaxI/AAAAAAAAAGA/11gv-69kUD4/s1600-h/ivy-edit-libs.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="http://2.bp.blogspot.com/_Cib_A77V54U/S2xMXVWKaxI/AAAAAAAAAGA/11gv-69kUD4/s200/ivy-edit-libs.png" width="145" /></a>The other option you have is adding a plugin to Eclipse that can handle Ivy for you, right within the IDE. Luckily for Eclipse there is <a href="http://ant.apache.org/ivy/ivyde/">IvyDE</a>. You install it according to its <a href="http://ant.apache.org/ivy/ivyde/download.cgi">documentation</a> and then add a "Classpath Container" as described <a href="http://ant.apache.org/ivy/ivyde/history/latest-milestone/cp_container.html">here</a>. That part works quite well and after a restart IvyDE is ready to go.<br />
<br />
A few more steps have to be done to get HBase working - as in compiling without errors. The crucial one is editing the Ivy library settings and pointing them at the HBase specific Ivy files, in particular the "Ivy settings path" and the properties file. The latter especially specifies all the various version numbers that the ivy.xml is using. Without it the Ivy resolve process will fail with many errors all over the place. Please note that the screen shot I added shows how it looks on my Windows PC. The paths will be slightly different for your setup, and probably in another format if you are on a Mac or Linux machine. As long as you specify both you should be fine.<br />
<br />
<a href="http://4.bp.blogspot.com/_Cib_A77V54U/S2xMUo_zXbI/AAAAAAAAAF4/UKgdgVW-2Lg/s1600-h/ivy-build-path.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="136" src="http://4.bp.blogspot.com/_Cib_A77V54U/S2xMUo_zXbI/AAAAAAAAAF4/UKgdgVW-2Lg/s200/ivy-build-path.png" width="200" /></a>The other important issue is that you have to repeat that same step adding the Classpath Container two more times: each of the two larger contrib packages "contrib/stargate" and "contrib/transactional" have their own ivy.xml! For both you have to go into the respective directory and right click on the ivy.xml and follow the steps described in the Ivy documentation. Enter the same information as mentioned above to make the resolve work, leave everything else the way it is. You may notice that the contrib packages have a few more targets unticked. That is OK and can be used as-is.<br />
<br />
As a temporary step you have to add two more static libraries that are in the <code>$HBASE_HOME/lib</code> directory: <code>libthrift-0.2.0.jar</code> and <code>zookeeper-3.2.2.jar</code>. Those will eventually be published on the Ivy repositories and then this step is obsolete (see <a href="http://issues.apache.org/jira/browse/INFRA-2461">INFRA-2461</a>).<br />
<br />
Eventually you end up with three containers as shown in the second and third screen shot. The Eclipse toolbar now also has an Ivy "Resolve All Dependencies" button which you can use to trigger the download process. Personally I had to do this a few times as the mirrors with the jars seem to be flaky at times. I ended up, for example, with "hadoop-mapred.jar" missing. Another resolve run fixed the problem.<br />
<br />
<a href="http://4.bp.blogspot.com/_Cib_A77V54U/S2x4bMfpQMI/AAAAAAAAAGQ/-7XucgUd8kU/s1600-h/ivy-eclipse.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="125" src="http://4.bp.blogspot.com/_Cib_A77V54U/S2x4bMfpQMI/AAAAAAAAAGQ/-7XucgUd8kU/s200/ivy-eclipse.png" width="200" /></a>The last screen shot shows the three Ivy related containers once more in the tree view of the Package Explorer in the Java perspective. What you also see is the Ivy console, which also is installed with the plugin. You have to open it as usual using the "Window - Show View - Console" menu (if you do not have the Console View open already) and then use the drop down menu next to the "Open Console" button in that view to open the Ivy console. It gives you access to all the details when resolving the dependencies and can hint when you have done something wrong. Please note though that it also lists a lot of connection errors, one for every mirror or repository that does not respond or yet has the required package available. One of them should respond though or as mentioned above you will have to try later again.<br />
<br />
Eclipse automatically compiles the project and if everything worked out it does so now without a hitch. Good luck!<br />
<br />
<b>Update:</b> Added info about the yet still static thrift and zookeeper jars. See Kay Kay's comment below.Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-860423771829255614.post-67024836801442688952010-01-30T15:45:00.000-08:002010-01-30T16:11:16.085-08:00HBase Architecture 101 - Write-ahead-LogWhat is the Write-ahead-Log you ask? In my previous <a href="http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html">post</a> we had a look at the general storage architecture of HBase. One thing that was mentioned is the Write-ahead-Log, or WAL. This post explains how the log works in detail, but bear in mind that it describes the current version, which is 0.20.3. I will address the various plans to improve the log for 0.21 at the end of this article. For the term itself please read <a href="http://en.wikipedia.org/wiki/Write-ahead_logging">here</a>.<br />
<br />
<u>Big Picture</u><br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_Cib_A77V54U/S2M98DazIVI/AAAAAAAAAFw/cmp0W38kWGY/s1600-h/wal-flow.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="230" src="http://1.bp.blogspot.com/_Cib_A77V54U/S2M98DazIVI/AAAAAAAAAFw/cmp0W38kWGY/s400/wal-flow.png" width="400" /></a></div>The WAL is the lifeline that is needed when disaster strikes. Similar to a BIN log in MySQL it records all changes to the data. This is important in case something happens to the primary storage. So if the server crashes it can effectively replay that log to get everything up to where the server should have been just before the crash. It also means that if writing the record to the WAL fails the whole operation must be considered a failure.<br />
<br />
Let"s look at the high level view of how this is done in HBase. First the client initiates an action that modifies data. This is currently a call to <code>put(Put)</code>, <code>delete(Delete)</code> and <code>incrementColumnValue()</code> (abbreviated as "incr" here at times). Each of these modifications is wrapped into a <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/KeyValue.html">KeyValue</a> object instance and sent over the wire using RPC calls. The calls are (ideally batched) to the <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/regionserver/HRegionServer.html">HRegionServer</a> that serves the affected regions. Once it arrives the payload, the said KeyValue, is routed to the <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/regionserver/HRegion.html">HRegion</a> that is responsible for the affected row. The data is written to the WAL and then put into the <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/regionserver/MemStore.html">MemStore</a> of the actual <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/regionserver/Store.html">Store</a> that holds the record. And that also pretty much describes the write-path of HBase.<br />
<br />
Eventually, when the MemStore gets to a certain size or after a specific time, the data is asynchronously persisted to the file system. In between, the data is stored volatile in memory. And if the HRegionServer hosting that memory crashes the data is lost... but for the existence of what is the topic of this post, the WAL!<br />
<br />
We have a look now at the various classes or "wheels" working the magic of the WAL. First up is one of the main classes of this contraption.<br />
<br />
<u>HLog</u><br />
<br />
The class which implements the WAL is called <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/regionserver/HLog.html">HLog</a>. What you may have read in my previous post and is also illustrated above is that there is only one instance of the HLog class, which is one per HRegionServer. When a HRegion is instantiated the single HLog is passed on as a parameter to the constructor of HRegion.<br />
<br />
Central part to HLog's functionality is the <code>append()</code> method, which internally eventually calls <code>doWrite()</code>. It is what is called when the above mentioned modification methods are invoked... or is it? One thing to note here is that for performance reasons there is an option for <code>put()</code>, <code>delete()</code>, and <code>incrementColumnValue()</code> to be called with an extra parameter set: <code>setWriteToWAL(boolean)</code>. If you invoke this method while setting up for example a <code>Put()</code> instance then the writing to WAL is forfeited! That is also why the downward arrow in the big picture above is done with a dashed line to indicate the optional step. By default you certainly want the WAL, no doubt about that. But say you run a large bulk import MapReduce job that you can rerun at any time. You gain extra performance but need to take extra care that no data was lost during the import. The choice is yours.<br />
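<br />
For illustration, a bulk import tool that can safely be rerun might skip the WAL like this - the helper class and parameter names are made up for this sketch, only <code>setWriteToWAL()</code> is the actual switch discussed above:<br />
<br />
<pre class="brush:java">import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

public class BulkImportHelper {

  // Writes a single cell while skipping the WAL - only sensible for jobs
  // that can be rerun from scratch should a region server crash.
  public static void putWithoutWAL(HTable table, byte[] row, byte[] family,
      byte[] qualifier, byte[] value) throws IOException {
    Put put = new Put(row);
    put.setWriteToWAL(false); // forfeit the write-ahead log for this edit
    put.add(family, qualifier, value);
    table.put(put);
  }
}</pre>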
<br />
Another important feature of the HLog is keeping track of the changes. This is done by using a "sequence number". It uses an <a href="http://java.sun.com/javase/6/docs/api/java/util/concurrent/atomic/AtomicLong.html">AtomicLong</a> internally to be thread-safe and is either starting out at zero - or at that last known number persisted to the file system. So as the region is opening its storage file, it reads the highest sequence number which is stored as a meta field in each <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/io/hfile/HFile.html">HFile</a> and sets the HLog sequence number to that value if it is higher than what has been recorded before. So at the end of opening all storage files the HLog is initialized to reflect where persisting has ended and where to continue. You will see in a minute where this is used.<br />
<br />
<a href="http://2.bp.blogspot.com/_Cib_A77V54U/S2IVD_TiqsI/AAAAAAAAAFo/dXO_-nZWz1c/s1600-h/wal.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="243" src="http://2.bp.blogspot.com/_Cib_A77V54U/S2IVD_TiqsI/AAAAAAAAAFo/dXO_-nZWz1c/s400/wal.png" width="400" /></a>The image to the right shows three different regions. Each of them covering a different row key range. As mentioned above each of these regions shares the the same single instance of HLog. What that means in this context is that the data as it arrives at each region it is written to the WAL in an unpredictable order. We will address this further below.<br />
<br />
Finally the HLog has the facilities to recover and split a log left by a crashed HRegionServer. These are invoked by the <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/master/HMaster.html">HMaster</a> before regions are deployed again.<br />
<br />
<u>HLogKey</u><br />
<br />
Currently the WAL is using a Hadoop <a href="http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/SequenceFile.html">SequenceFile</a>, which stores records as sets of key/values. For the WAL the value is simply the KeyValue sent from the client. The key is represented by an <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/regionserver/HLogKey.html">HLogKey</a> instance. If you recall from my first <a href="http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html">post</a> in this series, the KeyValue only represents the row, column family, qualifier, timestamp, and value, as well as the "Key Type". Last time I did not address that field since there was no context. Now we have one because the Key Type is what identifies what the KeyValue represents, a "put" or a "delete" (where there are a few more variations of the latter to express what is to be deleted, value, column family or a specific column).<br />
<br />
What we are missing though is where the KeyValue belongs to, i.e. the region and the table name. That is stored in the HLogKey. What is also stored is the above sequence number. With each record that number is incremented to be able to keep a sequential order of edits. Finally it records the "Write Time", a time stamp to record when the edit was written to the log.<br />
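<br />
To summarize the description above, here is a conceptual sketch - explicitly not the actual HBase class - of what one WAL record carries:<br />
<br />
<pre class="brush:java">// Conceptual sketch only, not org.apache.hadoop.hbase.regionserver.HLogKey.
public class WalRecordSketch {
  // the key part, i.e. what the HLogKey holds according to the text above
  byte[] regionName;    // which region the edit belongs to
  byte[] tableName;     // which table that region is part of
  long sequenceNumber;  // incremented per edit to keep a sequential order
  long writeTime;       // when the edit was written to the log
  // the value part of the SequenceFile record is the KeyValue sent by the client
}</pre>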
<br />
<u>LogFlusher</u><br />
<br />
As mentioned above, as data arrives at an HRegionServer in the form of KeyValue instances it is (optionally) written to the WAL. And as mentioned as well, it is then written to a SequenceFile. While this seems trivial, it is not. One of the base classes in Java IO is the Stream. Especially streams writing to a file system are often buffered to improve performance, as the OS is much faster writing data in batches, or blocks. If you were to write each record separately, IO throughput would be really bad. But in the context of the WAL this causes a gap where data is supposedly written to disk but in reality it is in limbo. To mitigate the issue the underlying stream needs to be flushed on a regular basis. This functionality is provided by the <a href="http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/regionserver/LogFlusher.html">LogFlusher</a> class and thread. It simply calls <code>HLog.optionalSync()</code>, which checks if the <code>hbase.regionserver.optionallogflushinterval</code>, set to 10 seconds by default, has been exceeded and if that is the case invokes <code>HLog.sync()</code>. The other place invoking the sync method is <code>HLog.doWrite()</code>. Once it has written the current edit to the stream it checks if the <code>hbase.regionserver.flushlogentries</code> parameter, set to 100 by default, has been exceeded and calls sync as well.<br />
<br />
Sync itself invokes <code>HLog.Writer.sync()</code> and is implemented in <code>SequenceFileLogWriter</code>. For now we assume it flushes the stream to disk and all is well. That in reality this is all a bit more complicated is discussed below.<br />
<br />
<u>LogRoller</u><br />
<br />
Obviously it makes sense to have some size restrictions related to the logs written. Also we want to make sure a log is persisted on a regular basis. This is done by the LogRoller class and thread. It is controlled by the <code>hbase.regionserver.logroll.period</code> parameter in the <code>$HBASE_HOME/conf/hbase-site.xml</code> file. By default this is set to 1 hour. So every 60 minutes the log is closed and a new one started. Over time we gather a bunch of log files this way, which need to be maintained as well. The <code>HLog.rollWriter()</code> method, which is called by the LogRoller to do the above rolling of the current log file, is taking care of that as well by calling <code>HLog.cleanOldLogs()</code> subsequently. It checks what the highest sequence number written to a storage file is, because up to that number all edits are persisted. It then checks if there is a log left that has edits all less than that number. If that is the case it deletes said logs and leaves just those that are still needed.<br />
<br />
<div style="background-color: #eeeeee; border: 1px solid #000000; padding: 5px;">This is a good place to talk about the following obscure message you may see in your logs:<br />
<br />
<code>2009-12-15 01:45:48,427 INFO org.apache.hadoop.hbase.regionserver.HLog: Too<br />
many hlogs: logs=130, maxlogs=96; forcing flush of region with oldest edits:<br />
foobar,1b2dc5f3b5d4,1260083783909</code><br />
<br />
It is printed because the configured maximum number of log files to keep exceeds the number of log files that are required to be kept because they still contain outstanding edits that have not yet been persisted. The main reason I saw this being the case is when you stress out the file system so much that it cannot keep up persisting the data at the rate new data is added. Otherwise log flushes should take care of this. Note though that when this message is printed the server goes into a special mode trying to force flushing out edits to reduce the number of logs required to be kept.</div><br />
The other parameters controlling the log rolling are <code>hbase.regionserver.hlog.blocksize</code> and <code>hbase.regionserver.logroll.multiplier</code>, which are set by default to rotate logs when they are at 95% of the blocksize of the SequenceFile, typically 64M. So the logs are switched out either when they are considered full or when a certain amount of time has passed, whichever comes first. <br />
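<br />
To put the parameters discussed above in one place, here is an example of how they could be overridden in <code>hbase-site.xml</code>. The values simply restate the defaults mentioned in the text (the time based properties are assumed to be in milliseconds), so treat them as an illustration rather than a recommendation:<br />
<br />
<pre class="brush:xml; gutter: false;">
<property>
  <name>hbase.regionserver.flushlogentries</name>
  <value>100</value>
</property>
<property>
  <name>hbase.regionserver.optionallogflushinterval</name>
  <value>10000</value>
</property>
<property>
  <name>hbase.regionserver.logroll.period</name>
  <value>3600000</value>
</property>
</pre>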
<br />
<u>Replay</u><br />
<br />
Once a HRegionServer starts and is opening the regions it hosts it checks if there are some left over log files and applies those all the way down in <code>Store.doReconstructionLog()</code>. Replaying a log is simply done by reading the log and adding the contained edits to the current MemStore. At the end an explicit flush of the MemStore (note, this is not the flush of the log!) helps writing those changes out to disk.<br />
<br />
The old logs usually come from a previous region server crash. When the HMaster is started or detects that a region server has crashed, it splits the log files belonging to that server into separate files and stores those in the region directories on the file system they belong to. After that the above mechanism takes care of replaying the logs. One thing to note is that regions from a crashed server can only be redeployed if the logs have been split and copied. Splitting itself is done in <code>HLog.splitLog()</code>. The old log is read into memory in the main thread (meaning it is single threaded) and then written to all region directories using a pool of threads, one thread for each region. <br />
<br />
<u>Issues</u><br />
<br />
As mentioned above, all edits are written to one HLog per HRegionServer. You may ask, why is that the case? Why not write all edits for a specific region into its own log file? Let's quote the <a href="http://labs.google.com/papers/bigtable.html">BigTable</a> paper once more:<br />
<br />
<blockquote style="border: 1px solid #000000; margin: 1em 20px; padding: 5px;">If we kept the commit log for each tablet in a separate log file, a very large number of files would be written concurrently in GFS. Depending on the underlying file system implementation on each GFS server, these writes could cause a large number of disk seeks to write to the different physical log files.</blockquote><br />
HBase followed that principle for pretty much the same reasons. As explained above you end up with many files since logs are rolled and kept until they are safe to be deleted. If you do this for every region separately this would not scale well - or at least be an itch that sooner or later is causing pain.<br />
<br />
So far that seems to be no issue. But again, it causes problems when things go wrong. As long as you have applied all edits in time and persisted the data safely, all is well. But if you have to split the log because of a server crash then you need to divide it into suitable pieces, as described above in the "replay" paragraph. But as you have also seen above, all edits are intermingled in the log and there is no index of what is stored at all. For that reason the HMaster cannot redeploy any region from a crashed server until it has split the logs for that very server. And that can be quite a number of logs if the server was behind applying the edits. <br />
<br />
Another problem is data safety. You want to be able to rely on the system to save all your data, no matter what newfangled algorithms are employed behind the scenes. As far as HBase and the log are concerned, you can turn down the log flush times to as low as you want - you are still dependent on the underlying file system as mentioned above; the stream used to store the data is flushed but is it written to disk yet? We are talking about <a href="http://en.wikipedia.org/wiki/Sync_(Unix)">fsync</a> style issues. Now for HBase we are most likely talking about Hadoop's HDFS as the file system that is persisted to.<br />
<br />
Up to this point it should be abundantly clear that the log is what keeps data safe. For that reason a log could be kept open for up to an hour (or more if configured so). As data arrives a new key/value pair is written to the SequenceFile and occasionally flushed to disk. But that is not how Hadoop was designed to work. It was meant to provide an API that allows you to open a file, write data into it (preferably a lot) and close it right away, leaving an immutable file for everyone else to read many times. Only after a file is closed is it visible and readable to others. If a process dies while writing the data the file is pretty much considered lost. What is required is a feature that allows reading the log up to the point where the crashed server has written it (or as close as possible). <br />
<br />
<div style="background-color: #eeeeee; border: 1px solid #000000; padding: 5px;"><b>Interlude:</b> <u>HDFS append, hflush, hsync, sync... wth?</u><br />
<br />
It all started with <a href="http://issues.apache.org/jira/browse/HADOOP-1700">HADOOP-1700</a> reported by HBase lead Michael Stack. It was committed in Hadoop 0.19.0 and meant to solve the problem. But that was not the case. So the issue was tackled again in HADOOP-4379 aka <a href="http://issues.apache.org/jira/browse/HDFS-200">HDFS-200</a> and implemented <code>syncFs()</code> that was meant to help syncing changes to a file to be more reliable. For a while we had custom code (see <a href="http://issues.apache.org/jira/browse/HBASE-1470">HBASE-1470</a>) that detected a patched Hadoop that exposed that API. But again this did not solve the issue entirely. <br />
<br />
Then came <a href="http://issues.apache.org/jira/browse/HDFS-265">HDFS-265</a>, which revisits the append idea in general. It also introduces a <code>Syncable</code> interface that exposes <code>hsync()</code> and <code>hflush()</code>. <br />
<br />
Lastly <code>SequenceFile.Writer.sync()</code> is <u>not</u> the same as the above, it simply writes a synchronization marker into the file that helps reading it later - or recover data if broken.</div><br />
While append for HDFS in general is useful it is not used in HBase, but the <code>hflush()</code> is. What it does is write out everything to disk as the log is written. In case of a server crash we can safely read that "dirty" file up to the last edits. The append in Hadoop 0.19.0 was so badly suited that a <code>hadoop fsck /</code> would report the DFS being corrupt because of the open log files HBase kept.<br />
<br />
Bottom line is, without Hadoop 0.21.0 you can very well face data loss. With Hadoop 0.21.0 you have a state-of-the-art system.<br />
<br />
<u>Planned Improvements</u><br />
<br />
For HBase 0.21.0 there are quite a few things lined up that affect the WAL architecture. Here are some of the noteworthy ones.<br />
<br />
<u>SequenceFile Replacement</u><br />
<br />
One of the central building blocks around the WAL is the actual storage file format. The used SequenceFile has quite a few shortcomings that need to be addressed. One for example is the suboptimal performance as all writing in SequenceFile is synchronized, as documented in <a href="http://issues.apache.org/jira/browse/HBASE-2105">HBASE-2105</a>.<br />
<br />
As with HFile replacing MapFile in HBase 0.20.0 it makes sense to think about a complete replacement. A first step was done to make the HBase classes independent of the underlying file format. <a href="http://issues.apache.org/jira/browse/HBASE-2059">HBASE-2059</a> made the class implementing the log configurable.<br />
<br />
Another idea is to change to a different serialization altogether. <a href="http://issues.apache.org/jira/browse/HBASE-2055">HBASE-2055</a> proposes such a format using Hadoop's <a href="http://hadoop.apache.org/avro/">Avro</a> as the low level system. Avro is also slated to be the new RPC format for Hadoop, which does help as more people are familiar with it.<br />
<br />
<u>Append/Sync</u><br />
<br />
Even with <code>hflush()</code> we have a problem that calling it too often may cause the system to slow down. Previous tests using the older <code>syncFs()</code> call did show that calling it for every record slows down the system considerably. One step to help is to implement a "Group Commit", done in <a href="http://issues.apache.org/jira/browse/HBASE-1939">HBASE-1939</a>. It flushes out records in batches. In addition <a href="http://issues.apache.org/jira/browse/HBASE-1944">HBASE-1944</a> adds the notion of a "deferred log flush" as a parameter of a Column Family. If set to <code>true</code> it leaves the syncing of changes to the log to the newly added LogSyncer class and thread. Finally <a href="http://issues.apache.org/jira/browse/HBASE-2041">HBASE-2041</a> sets the <code>flushlogentries</code> to 1 and <code>optionallogflushinterval</code> to 1000 msecs. The <code>.META.</code> is always synced for every change, user tables can be configured as needed. <br />
<br />
<u>Distributed Log Splitting</u><br />
<br />
As remarked, splitting the log is an issue when regions need to be redeployed. One idea is to keep a list of regions with edits in ZooKeeper. That way at least all "clean" regions can be deployed instantly. Only those with edits then need to wait until the logs are split. <br />
<br />
What is left is to improve how the logs are split to make the process faster. Here is how BigTable addresses the issue:<br />
<blockquote style="border: 1px solid #000000; margin: 1em 20px; padding: 5px;">One approach would be for each new tablet server to read this full commit log file and apply just the entries needed for the tablets it needs to recover. However, under such a scheme, if 100 machines were each assigned a single tablet from a failed tablet server, then the log file would be read 100 times (once by each server).</blockquote>and further <br />
<blockquote style="border: 1px solid #000000; margin: 1em 20px; padding: 5px;">We avoid duplicating log reads by first sorting the commit log entries in order of the keys (table, row name, log sequence number). In the sorted output, all mutations for a particular tablet are contiguous and can therefore be read efficiently with one disk seek followed by a sequential read. To parallelize the sorting, we partition the log file into 64 MB segments, and sort each segment in parallel on different tablet servers. This sorting process is coordinated by the master and is initiated when a tablet server indicates that it needs to recover mutations from some commit log file.</blockquote>This is where its at. As part of the HMaster rewrite (see <a href="http://issues.apache.org/jira/browse/HBASE-1816">HBASE-1816</a>) the log splitting will be addressed as well. <a href="http://issues.apache.org/jira/browse/HBASE-1364">HBASE-1364</a> wraps the splitting of logs into one issue. But I am sure that will evolve in more sub tasks as the details get discussed.Unknownnoreply@blogger.com7tag:blogger.com,1999:blog-860423771829255614.post-53229598319087728732010-01-28T04:47:00.000-08:002010-01-28T04:52:49.107-08:002nd Munich OpenHUG MeetingAt the end of last year we had a first meeting in Munich to allow everyone interested in all things Hadoop and related technologies to gather, get to know each other and exchange their knowledge, experiences and information. We would like to invite for the second Munich OpenHUG Meeting and hope to see you all again and meet even more new enthusiasts there. We would also be thrilled if those attending would be willing to share their story so that we can learn about other projects and how people are using the exciting NoSQL related technologies. No pressure though, come along and simply listen if you prefer, we welcome anyone and everybody!<br />
<br />
When: Thursday February 25, 2010 at 5:30pm open end <br />
Where: eCircle AG, Nymphenburger Straße 86, 80636 München ["Bruckmann" Building, "U1 Mailinger Str", <a href="http://www.ecircle.com/de/kontakt/anfahrt.html">map</a> (in German) and look for the signs]<br />
<br />
Thanks again to Bob Schulze from eCircle for providing the location and projector. So far we have a talk scheduled by Christoph Rupp about <a href="http://hamsterdb.com/">HamsterDB</a>. We are still looking for volunteers who would like to present on any related topic (please <a href="mailto:info@larsgeorge.com">contact me</a>)! Otherwise we will have an open discussion about whatever is brought up by the attendees.<br />
<br />
Last but not least there will be something to drink and we will get pizzas in. Since we do not know how many of you will come, we will simply stay at the event's location and continue our chats over food. <br />
<br />
Looking forward to seeing you there!<br />
<br />
Please RSVP at <a href="http://upcoming.yahoo.com/event/5279322/BY/Mnchen/2nd-Munich-OpenHUG-Meeting/eCircle-AG">Upcoming</a> or <a href="https://www.xing.com/events/2nd-munich-openhug-meeting-458852">Xing</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-860423771829255614.post-51761427051601107352010-01-10T11:52:00.000-08:002010-01-10T11:56:20.024-08:00First Munich OpenHUG Meeting - SummaryOn December 17th we had the first Munich OpenHUG meeting. The location was kindly provided by eCircle's Bob Schulze. This was the first in a series of meetings in Munich based on everything Hadoop and related technologies. Let's use the term NoSQL with care as this is not about blame or finger pointing (I feel that it was used like that in the past and therefore making this express point). We are trying to get together the brightest local and remote talent to report and present on new age and evolved existing technologies. <br />
<br />
The first talk of the night was given by Bob himself presenting his findings of evaluating HBase and Hadoop for their internal use. He went into detail explaining how HBase is structuring its data and how it can be used for their needs. One thing that I noted in particular was his <a href="http://sourceforge.net/projects/hbaseexplorer/">HBase Explorer</a>, which he subsequently published on SourceForge as an Open Source project. The talk was concluded by an open discussion about HBase.<br />
<br />
The second part of the meeting was my own <a href="http://www.docstoc.com/docs/21748013/HBase-at-WorldLingo---Munich-OpenHUG">presentation</a> about how we at WorldLingo use HBase and Hadoop (as well as Lucene etc.)<br />
<br />
We continued our discussion on HBase with the developers of eCircle present. This was very interesting and fruitful and we had the chance to exchange experiences made along our similar paths. <br />
<br />
I would have wished for the overall attendance to be a little higher, but it was a great start. Talking to other hosts of similar events it seems that this is normal and therefore my hopes are up for the next meetings throughout this year. We have planned the next meeting for <b>February 25th, 2010</b> at the same location. If you have interest in presenting a talk on any related topic, please <a href="mailto:info@larsgeorge.com">contact</a> me!<br />
<br />
I am looking forward to meeting you all there!<br />
<br />
LarsUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-860423771829255614.post-30071659151016753722009-12-14T01:22:00.000-08:002009-12-14T01:24:56.731-08:00First Munich OpenHUG MeetingFirst Munich OpenHUG Meeting<br />
<br />
We are trying to gauge the interest in a south Germany Hadoop User Group Meeting. After seeing quite a big interest in the Berlin meetings a few of us got together and decided to test the waters for another meeting at the other end of the country. We are therefore happy to announce the first Munich OpenHUG Meeting.<br />
<br />
When: Thursday December 17, 2009 at 5:30pm open end<br />
Where: eCircle AG, Nymphenburger Straße 86, 80636 München ("Bruckmann" Building, "U1 Mailinger Str", <a href="http://www.ecircle.com/de/kontakt/anfahrt.html">map</a> in German and look for the signs)<br />
<br />
Thanks to Bob Schulze from eCircle for providing the location and projector, and also for giving a first presentation on how eCircle is planning to use the Hadoop stack.<br />
<br />
We also have Dave Butlerdi giving an overview of his usage of Hadoop. <br />
Finally I will give a state of affairs of the HBase project. What is it, what does it do and how am I using it (since early 2008).<br />
<br />
We are also open for everyone who wants to talk about anything related to these new technologies often combined under the rather new term "NoSQL". Take the opportunity to talk about what you are working on and find like minded people to bounce ideas off. This is also why we chose the title OpenHUG for the meeting. While we mostly work with Hadoop and its subprojects we also like to learn about related projects and technologies.<br />
<br />
Last but not least there will be something to drink and we will get pizzas in. Since we do not know how many of you will come on such short notice, we will simply stay at Bob's place and continue our chats over food.<br />
<br />
As this is a first meeting in Munich on this topic we scheduled it on short notice, a day after the Berlin meeting. Given there is interest, we will in the future settle on dates that fit nicely between the Berlin dates so that we have no overlap and you can attend both meetings.<br />
<br />
Please RSVP at <a href="http://upcoming.yahoo.com/event/4897497/BY/Mnchen/First-Munich-OpenHUG/eCircle-AG">Yahoo's Upcoming</a> or <a href="http://www.xing.com/events/munich-openhug-437166">Xing</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-860423771829255614.post-63968669554774062612009-11-24T06:08:00.000-08:002010-10-26T03:29:13.708-07:00HBase vs. BigTable ComparisonHBase is an open-source implementation of the Google <a href="http://labs.google.com/papers/bigtable.html">BigTable</a> architecture. That part is fairly easy to understand and grasp. What I personally feel is a bit more difficult is to understand how much HBase covers and where there are differences (still) compared to the BigTable specification. This post is an attempt to compare the two systems.<br />
<br />
Before we embark onto the <strike>dark</strike> technology side of things I would like to point out one thing upfront: HBase is very close to what the BigTable paper describes. Putting aside minor differences, as of <a href="http://hadoop.apache.org/hbase/releases.html">HBase 0.20</a>, which is using <a href="http://hadoop.apache.org/zookeeper/">ZooKeeper</a> as its <strike>lock</strike> distributed coordination service, it has all the means to be nearly an exact implementation of BigTable's functionality. What I will be looking into below are mainly subtle variations or differences. Where possible I will try to point out how the HBase team is working on improving the situation given there is a need to do so. <br />
<br />
<h4>Scope</h4>The comparison in this post is based on the OSDI'06 paper that describes the system Google implemented in about seven person-years and which is in operation since 2005. The paper was published 2006 while the HBase sub-project of <a href="http://hadoop.apache.org/">Hadoop</a> was established only around the end of that same year to early 2007. Back then the current version of Hadoop was 0.15.0. Given we are now about 2 years in, with Hadoop 0.20.1 and HBase 0.20.2 available, you can hopefully understand that indeed much has happened since. Please also note that I am comparing a 14 page high level technical paper with an open-source project that can be examined freely from top to bottom. It usually means that there is more to tell about how HBase does things because the information is available. <br />
<br />
Towards the end I will also address a few newer features that BigTable has nowadays and how HBase is comparing to those. We start though with naming conventions.<br />
<br />
<h4>Terminology</h4>There are a few different terms used in either system describing the same thing. The most prominent is what HBase calls "regions" while Google refers to it as a "tablet". These are the partitions of subsequent rows spread across many "region servers" - or "tablet servers" respectively. Apart from that most differences are minor or caused by the usage of related technologies, since Google's code is obviously closed-source and therefore only mirrored by open-source projects. The open-source projects are free to use other terms and, most importantly, names for the projects themselves.<br />
<br />
<h4>Features</h4>The following table lists various "features" of BigTable and compares them with what HBase has to offer. Some are actual implementation details, some are configurable options, and so on. This may be confusing, but it would be difficult to sort them into categories without ending up with only one entry in each of them. <br />
<br />
<table border="1" bordercolor="#000000" cellpadding="8" cellspacing="0" style="page-break-before: always;" width="95%"><tbody>
<tr valign="top"> <td bgcolor="#000000" nowrap><span style="color: white;"><b>Feature</b></span></td> <td bgcolor="#000000" nowrap><div align="center"><span style="color: white;"><b> Google BigTable </b></span></div></td> <td bgcolor="#000000" nowrap><div align="center"><span style="color: white;"><b> Apache HBase </b></span></div></td> <td bgcolor="#000000"><span style="color: white;"><b>Notes</b></span></td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Atomic Read/Write/Modify</td> <td style="border: solid black 1.0pt;"><div align="center">Yes, per row</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes, per row</div></td> <td style="border: solid black 1.0pt;">Since BigTable does not strive to be a relational database it does not have transactions. The closest to such a mechanism is the atomic access to each row in the table. HBase also implements a row lock API which allows the user to lock more than one row at a time.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Lexicographic Row Order</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">All rows are sorted lexicographically in one order and that one order only. Again, this is no SQL database where you can have different sorting orders.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Block Support</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Within each storage file data is written as smaller blocks of data. This enables faster loading of data from large storage files. The size is configurable in either system. The typical size is 64K.</td>
<tr valign="top"> <td style="border: solid black 1.0pt;">Block Compression</td> <td style="border: solid black 1.0pt;"><div align="center">Yes, per column family</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes, per column family</div></td> <td style="border: solid black 1.0pt;">Google uses BMDiff and Zippy in a two step process. BMDiff works really well because neighboring key-value pairs in the store files are often very similar. This can be achieved by using versioning so that all modifications to a value are stored next to each other but still have a lot in common. Or by designing the row keys in such a way that for example web pages from the same site are all bundled. Zippy then is a modified LZW algorithm. HBase on the other hand uses the standard Java supplied GZip or with a little <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">effort</a> the GPL licensed LZO format. There are indications though that Hadoop also may want to have BMDiff (<a href="http://issues.apache.org/jira/browse/HADOOP-5793">HADOOP-5793</a>) and possibly Zippy as well.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Number of Column Families</td> <td style="border: solid black 1.0pt;"><div align="center">Hundreds at Most</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Less than 100</div></td> <td style="border: solid black 1.0pt;">While the number of rows and columns is theoretically unbound the number of column families is not. This is a design trade-off but does not impose too much restrictions if the tables and key are designed accordingly.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Column Family Name Format</td> <td style="border: solid black 1.0pt;"><div align="center">Printable</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Printable</div></td> <td style="border: solid black 1.0pt;">The main reason for HBase here is that column family names are used as directories in the file system.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Qualifier Format</td> <td style="border: solid black 1.0pt;"><div align="center">Arbitrary</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Arbitrary</div></td> <td style="border: solid black 1.0pt;">Any arbitrary byte[] array can be used.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Key/Value Format</td> <td style="border: solid black 1.0pt;"><div align="center">Arbitrary</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Arbitrary</div></td> <td style="border: solid black 1.0pt;">Like above, any arbitrary byte[] array can be used.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Access Control</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">BigTable enforces access control on a column family level. HBase does not have yet have that feature (see <a href="http://issues.apache.org/jira/browse/hbase-1697">HBASE-1697</a>).</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Cell Versions</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Versioning is done using timestamps. See next feature below too. The number of versions that should be kept are freely configurable on a column family level.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Custom Timestamps</td> <td style="border: solid black 1.0pt;"><div align="center">Yes (micro)</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes (milli)</div></td> <td style="border: solid black 1.0pt;">With both systems you can either set the timestamp of a value that is stored yourself or leave the default "now". There are "known" restrictions in HBase that the outcome is indeterminate when adding older timestamps after already having stored newer ones beforehand.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Data Time-To-Live</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Besides having versions of data cells the user can also set a time-to-live on the stored data that allows to discard data after a specific amount of time.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Batch Writes</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Both systems allow to batch table operations.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Value based Counters</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">BigTable and HBase can use a specific column as atomic counters. HBase does this by acquiring a row lock before the value is incremented.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Row Filters</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Again both system allow to apply filters when scanning rows.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Client Script Execution</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">BigTable uses <a href="http://research.google.com/archive/sawzall.html">Sawzall</a> to enable users to process the stored data.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">MapReduce Support</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Both systems have convenience classes that allow scanning a table in MapReduce jobs.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Storage Systems</td> <td style="border: solid black 1.0pt;"><div align="center">GFS</div></td> <td style="border: solid black 1.0pt;"><div align="center">HDFS, S3, S3N, EBS</div></td> <td style="border: solid black 1.0pt;">While BigTable works on Google's GFS, HBase has the option to use any file system as long as there is a proxy or driver class for it.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">File Format</td> <td style="border: solid black 1.0pt;"><div align="center">SSTable</div></td> <td style="border: solid black 1.0pt;"><div align="center">HFile</div></td> <td style="border: solid black 1.0pt;"> </td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Block Index</td> <td style="border: solid black 1.0pt;"><div align="center">At end of file</div></td> <td style="border: solid black 1.0pt;"><div align="center">At end of file</div></td> <td style="border: solid black 1.0pt;">Both storage file formats have a similar block oriented structure with the block index stored at the end of the file.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Memory Mapping</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">BigTable can memory map storage files directly into memory.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Lock Service</td> <td style="border: solid black 1.0pt;"><div align="center">Chubby</div></td> <td style="border: solid black 1.0pt;"><div align="center">ZooKeeper</div></td> <td style="border: solid black 1.0pt;">There is a difference in where ZooKeeper is used to coordinate tasks in HBase as opposed to provide locking services. Overall though ZooKeeper does for HBase pretty much what Chubby does for BigTable with slightly different semantics.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Single Master</td> <td style="border: solid black 1.0pt; background-color: #ff0000"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">HBase recently added support for multiple masters. These are on "hot" standby and monitor the master's ZooKeeper node.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Tablet/Region Count</td> <td style="border: solid black 1.0pt;"><div align="center">10-1000</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">10-1000</div></td> <td style="border: solid black 1.0pt;">Both systems recommend about the same amount of regions per region server. Of course this depends on many things but given a similar setup as far as "commodity" machines are concerned it seems to result in the same amount of load on each server.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Tablet/Region Size</td> <td style="border: solid black 1.0pt;"><div align="center">100-200MB</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">256MB</div></td> <td style="border: solid black 1.0pt;">The maximum region size can be configured for HBase and BigTable. HBase used 256MB as the default value.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Root Location</td> <td style="border: solid black 1.0pt;"><div align="center">1st META / Chubby</div></td> <td style="border: solid black 1.0pt;"><div align="center">-ROOT- / ZooKeeper</div></td> <td style="border: solid black 1.0pt;">HBase handles the Root table slightly different from BigTable, where it is the first region in the Meta table. HBase uses its own table with a single region to store the Root table. Once either system starts the address of the server hosting the Root region is stored in ZooKeeper or Chubby so that the clients can resolve its location without hitting the master. </td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Client Region Cache</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">The clients in either system caches the location of regions and has appropriate mechanisms to detect stale information and update the local cache respectively</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Meta Prefetch</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No (?)</div></td> <td style="border: solid black 1.0pt;">A design feature of BigTable is to fetch more than one Meta region information. This proactively fills the client cache for future lookups.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Historian</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">The history of region related events (such as splits, assignment, reassignment) is recorded in the Meta table.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Locality Groups</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ff0000"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">It is not entirely clear but it seems everything in BigTable is defined by Locality Groups. The group multiple column families into one so that they get stored together and also share the same configuration parameters. A single column family is probably a Locality Group with one member. HBase does not have this option and handles each column family separately.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">In-Memory Column Families</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">These are for relatively small tables that need very fast access times.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">KeyValue (Cell) Cache</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">This is a cache that servers hot cells.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Block Cache</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Blocks read from the storage files are cached internally in configurable caches.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Bloom Filters</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">These filters allow - at a cost of using memory on the region server - to quickly check if a specific cell exists or maybe not.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Write-Ahead Log (WAL)</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Each region server in either system stores one modification log for all regions it hosts.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Secondary Log</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ff0000"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">In addition to the Write-Ahead log mentioned above BigTable has a second log that it can use when the first is going slow. This is a performance optimization.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Skip Write-Ahead Log</td> <td style="border: solid black 1.0pt;"><div align="center">?</div></td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">For bulk imports the client in HBase can opt to skip writing into the WAL.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Fast Table/Region Split</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #00ff00"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;">Splitting a region or tablet is fast as the daughter regions first read the original storage file until a compaction finally rewrites the data into the region's local store.</td> </tr>
</tbody></table><br />
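To illustrate a few of the client-facing features from the table above - explicit timestamps, batched writes, atomic counters and row filters - here is a minimal client sketch. This is only a hedged example: it assumes the 0.20-era HBase Java API, the table and column family names are made up, and method names may differ slightly in other releases.<br />
<pre class="brush:java; gutter: false;">
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FeatureSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical table "testtable" with a column family "fam1".
    HTable table = new HTable(new HBaseConfiguration(), "testtable");

    // Custom timestamps: supply your own (millisecond) timestamp instead of the implicit "now".
    Put put1 = new Put(Bytes.toBytes("row-1"));
    put1.add(Bytes.toBytes("fam1"), Bytes.toBytes("qual1"), 1255000000000L, Bytes.toBytes("value-1"));

    // Batch writes: collect multiple Puts and send them in one call.
    Put put2 = new Put(Bytes.toBytes("row-2"));
    put2.add(Bytes.toBytes("fam1"), Bytes.toBytes("qual1"), Bytes.toBytes("value-2"));
    List<Put> batch = new ArrayList<Put>();
    batch.add(put1);
    batch.add(put2);
    table.put(batch);

    // Value-based counters: atomically increment a cell (HBase takes a row lock internally).
    long hits = table.incrementColumnValue(Bytes.toBytes("row-1"), Bytes.toBytes("fam1"), Bytes.toBytes("hits"), 1);
    System.out.println("Counter is now: " + hits);

    // Row filters: only return rows whose key starts with the given prefix.
    Scan scan = new Scan();
    scan.setFilter(new PrefixFilter(Bytes.toBytes("row-")));
    ResultScanner scanner = table.getScanner(scan);
    for (Result result : scanner) {
      System.out.println("Found row: " + Bytes.toString(result.getRow()));
    }
    scanner.close();
  }
}
</pre><br />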
<h4>New Features</h4>As mentioned above, a few years have passed since the original OSDI'06 BigTable paper. Jeff Dean - a fellow at Google - has mentioned a few new BigTable <a href="http://www.scribd.com/doc/21244790/Google-Designs-Lessons-and-Advice-from-Building-Large-Distributed-Systems">features</a> during speeches and presentations he gave recently. We will have a look at some of them here.<br />
<br />
<table border="1" bordercolor="#000000" cellpadding="8" cellspacing="0" style="page-break-before: always;" width="95%"><tbody>
<tr valign="top"> <td bgcolor="#000000" nowrap><span style="color: white;"><b>Feature</b></span></td> <td bgcolor="#000000" nowrap><div align="center"><span style="color: white;"><b> Google BigTable </b></span></div></td> <td bgcolor="#000000" nowrap><div align="center"><span style="color: white;"><b> Apache HBase </b></span></div></td> <td bgcolor="#000000"><span style="color: white;"><b>Notes</b></span></td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Client Isolation</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">BigTable is internally used to server many separate clients and can therefore keep the data between isolated.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Coprocessors</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">BigTable can host code that resides with the regions and splits with them as well. See <a href="http://issues.apache.org/jira/browse/HBASE-2000">HBASE-2000</a> for progress on this feature within HBase.</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Corruption Safety</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt;"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">This is an interesting topic. BigTable uses CRC checksums to verify if data has been written safely. While HBase does not have this, the question is if that is build into Hadoop's HDFS?</td> </tr>
<tr valign="top"> <td style="border: solid black 1.0pt;">Replication</td> <td style="border: solid black 1.0pt;"><div align="center">Yes</div></td> <td style="border: solid black 1.0pt; background-color: #ffff00"><div align="center">No</div></td> <td style="border: solid black 1.0pt;">HBase is working on the same topic in <a href="http://issues.apache.org/jira/browse/HBASE-1295">HBASE-1295</a></td> </tr>
</tbody></table><br />
Note: the color codes indicate which features have a direct match and which are missing (yet). Features colored yellow are weaker matches, as I am not sure if they are immediately necessary or even applicable given HBase's implementation.<br />
<br />
<h4>Variations and Differences</h4>Some of the above features need a bit more looking into, as they are difficult to narrow down to simple "yes or no" answers. I am addressing them separately below.<br />
<br />
<u>Lock Service</u><br />
<br />
This is from the BigTable paper:<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">Bigtable uses Chubby for a variety of tasks: to ensure that there is at most one active master at any time; to store the bootstrap location of Bigtable data (see Section 5.1); to discover tablet servers and finalize tablet server deaths (see Section 5.2); to store Bigtable schema information (the column family information for each table); and to store access control lists. If Chubby becomes unavailable for an extended period of time, Bigtable becomes unavailable.<br />
</blockquote><br />
There is a lot of overlap compared to how HBase uses ZooKeeper. What is different though is that schema information is not stored in ZooKeeper (yet; see http://wiki.apache.org/hadoop/ZooKeeper/HBaseUseCases for details). What is important here is the same reliance on the lock service being available. From my own experience, and from reading the threads on the HBase mailing list, it is often underestimated what can happen when ZooKeeper does not get the resources it needs to respond in a timely manner. It is better to have a small ZooKeeper cluster on older machines that are not doing anything else than to have ZooKeeper nodes running next to the already heavy Hadoop or HBase processes. Once you starve ZooKeeper you will see a domino effect of HBase nodes going down with it - including the master(s).<br />
<br />
<b>Update:</b> After talking to a few guys on the ZooKeeper team I would like to point out that this is indeed <u>not</u> a ZooKeeper issue. It has to do with the fact that if an already heavily loaded node also has to respond to ZooKeeper requests in time, you may face a timeout situation in which the HBase RegionServers and even the Master think that their coordination service is gone and shut themselves down. Patrick Hunt has responded to this by <a href="http://www.mail-archive.com/hbase-user@hadoop.apache.org/msg07290.html">mail</a> and by <a href="http://wiki.apache.org/hadoop/ZooKeeper/ServiceLatencyOverview">post</a>. Please read both to see that ZooKeeper is able to handle the load. I personally recommend setting up ZooKeeper, in combination with HBase, on a separate cluster - maybe a set of spare machines left over from a recent cluster upgrade that are slightly outdated (no 2x quad-core CPUs with 16GB of memory) but otherwise perfectly fine. This also allows you to monitor those machines separately, instead of seeing a combined CPU load of 100% on the servers without really knowing where it comes from and what effect it may have. <br />
<br />
Another important difference is that ZooKeeper is not a lock service like Chubby - and I do think it does not have to be as far as HBase is concerned. ZooKeeper is a distributed coordination service enabling HBase to do Master node elections etc. It also allows using semaphores to indicate state or required actions. So where Chubby creates a lock file to indicate that a tablet server is up and running, HBase uses ephemeral nodes that exist as long as the session between the RegionServer that created the node and ZooKeeper is active. This also causes differences in semantics: BigTable can delete a tablet server's lock file to indicate that it has lost its lease on its tablets, whereas HBase has to handle this differently because of the slightly less restrictive architecture of ZooKeeper. These are only semantics, as mentioned, and do not mean one is better than the other - just different.<br />
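<br />
To make the ephemeral node semantics a bit more tangible, here is a small sketch using the plain ZooKeeper Java API. It is only an illustration: the host name and znode path are made up, and this is not how the HBase code itself is structured.<br />
<pre class="brush:java; gutter: false;">
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralNodeSketch {
  public static void main(String[] args) throws Exception {
    // Connect to a (hypothetical) quorum member; the session timeout governs how long
    // the ephemeral node survives a client that has gone silent.
    ZooKeeper zk = new ZooKeeper("zkhost.example.com:2181", 30000, new Watcher() {
      public void process(WatchedEvent event) {
        // no-op watcher, we only care about the session here
      }
    });

    // Announce "this server is alive" - there is no lock file anyone has to clean up.
    zk.create("/demo-regionserver-1", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    // ... do work; as long as the session is alive the znode exists ...

    // Closing the session (or crashing) removes the znode automatically.
    zk.close();
  }
}
</pre>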
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">The first level is a file stored in Chubby that contains the location of the root tablet. The root tablet contains the location of all tablets in a special METADATA table. Each METADATA tablet contains the location of a set of user tablets. The root tablet is just the first tablet in the METADATA table, but is treated specially - it is never split - to ensure that the tablet location hierarchy has no more than three levels.<br />
</blockquote><br />
As mentioned above, in HBase the root region is its own table with a single region. I strongly doubt that this makes a difference compared to having it as the first (non-splittable) region of the meta table. It is just the same feature, implemented differently.<br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">The METADATA table stores the location of a tablet under a row key that is an encoding of the tablet's table identifier and its end row. <br />
</blockquote><br />
HBase has a different layout here. It stores the start and end row with each region, where the end row is exclusive and denotes the first (or start) row of the next region. Again, these are minor differences and I am not sure if one solution is better or worse than the other. It is just done differently.<br />
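<br />
For the curious, the catalog layout can be inspected with the regular client API. The following is only a hedged sketch: it assumes the 0.20-era catalog column names (an "info" family with a "server" qualifier) and simply prints each region's row key - which encodes the table name and start row - together with the server it is assigned to.<br />
<pre class="brush:java; gutter: false;">
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    HTable meta = new HTable(new HBaseConfiguration(), ".META.");
    ResultScanner scanner = meta.getScanner(new Scan());
    for (Result row : scanner) {
      // The row key has the form <tablename>,<startrow>,<regionid>
      byte[] server = row.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
      System.out.println(Bytes.toString(row.getRow()) + " -> " +
          (server == null ? "unassigned" : Bytes.toString(server)));
    }
    scanner.close();
  }
}
</pre>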
<br />
<u>Master Operation</u><br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">To detect when a tablet server is no longer serving its tablets, the master periodically asks each tablet server for the status of its lock. If a tablet server reports that it has lost its lock, or if the master was unable to reach a server during its last several attempts, the master attempts to acquire an exclusive lock on the server's file. If the master is able to acquire the lock, then Chubby is live and the tablet server is either dead or having trouble reaching Chubby, so the master ensures that the tablet server can never serve again by deleting its server file. Once a server's file has been deleted, the master can move all the tablets that were previously assigned to that server into the set of unassigned tablets. To ensure that a Bigtable cluster is not vulnerable to networking issues between the master and Chubby, the master kills itself if its Chubby session expires.<br />
</blockquote><br />
This is quite different, even up to the current HBase 0.20.2. Here the region servers use a heartbeat protocol to report for duty to the master and, subsequently, to signal that they are still alive. I am not sure if this topic is covered by the master rewrite umbrella issue <a href="http://issues.apache.org/jira/browse/HBASE-1816">HBASE-1816</a> - or if it needs to be addressed at all. It could well be that what we have in HBase now is sufficient and does its job just fine. It was created when there was no lock service yet and could therefore be considered legacy code too.<br />
<br />
<u>Master Startup</u><br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">The master executes the following steps at startup. (1) The master grabs a unique master lock in Chubby, which prevents concurrent master instantiations. (2) The master scans the servers directory in Chubby to find the live servers. (3) The master communicates with every live tablet server to discover what tablets are already assigned to each server. (4) The master scans the METADATA table to learn the set of tablets. Whenever this scan encounters a tablet that is not already assigned, the master adds the tablet to the set of unassigned tablets, which makes the tablet eligible for tablet assignment.<br />
</blockquote><br />
Along the lines of what I mentioned above, this part of the code was created before ZooKeeper was available. So HBase actually waits for the region servers to report for duty. It also scans the .META. table to learn what is there and which server each region is assigned to. ZooKeeper is (yet) only used to publish the server hosting the -ROOT- region, as shown in the sketch below. <br />
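<br />
Resolving the -ROOT- location can be reproduced with a few lines of ZooKeeper client code. Again only a hedged sketch: it assumes the default 0.20 znode layout, where the address is published under "/hbase/root-region-server" (the parent and node names are configurable), and uses a made-up quorum host name.<br />
<pre class="brush:java; gutter: false;">
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RootLookupSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("zkhost.example.com:2181", 30000, new Watcher() {
      public void process(WatchedEvent event) { }
    });
    // Read the znode where HBase publishes the server hosting the -ROOT- region
    // (assumed default path, see zookeeper.znode.* configuration properties).
    byte[] data = zk.getData("/hbase/root-region-server", false, null);
    System.out.println("-ROOT- is served by: " + new String(data));
    zk.close();
  }
}
</pre>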
<br />
<u>Tablet/Region Splits</u><br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">In case the split notification is lost (either because the tablet server or the master died), the master detects the new tablet when it asks a tablet server to load the tablet that has now split. The tablet server will notify the master of the split, because the tablet entry it finds in the METADATA table will specify only a portion of the tablet that the master asked it to load.<br />
</blockquote><br />
The master node in HBase uses the .META. table solely to detect when a region was split but the notification was lost. For that reason it scans the .META. table on a regular basis to see when a region appears that is not yet assigned. It will then assign that region as per its default strategy.<br />
<br />
<u>Compactions</u><br />
<br />
The following are more terminology differences than anything else.<br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">As write operations execute, the size of the memtable increases. When the memtable size reaches a threshold, the memtable is frozen, a new memtable is created, and the frozen memtable is converted to an SSTable and written to GFS. This minor compaction process has two goals: it shrinks the memory usage of the tablet server, and it reduces the amount of data that has to be read from the commit log during recovery if this server dies. Incoming read and write operations can continue while compactions occur.<br />
</blockquote><br />
HBase has a similar operation, but it is referred to as a "flush". In contrast, "minor compactions" in HBase rewrite the last N store files, i.e. those with the most recent mutations, as they are probably much smaller than previously created files that have more data in them.<br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">... we bound the number of such files by periodically executing a merging compaction in the background. A merging compaction reads the contents of a few SSTables and the memtable, and writes out a new SSTable. The input SSTables and memtable can be discarded as soon as the compaction has finished. <br />
</blockquote><br />
This again refers to what is called "minor compaction" in HBase.<br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">A merging compaction that rewrites all SSTables into exactly one SSTable is called a major compaction.<br />
</blockquote><br />
Here we have an exact match: a "major compaction" in HBase also rewrites all files into one.<br />
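<br />
Both a flush and the two kinds of compactions can also be triggered manually through the administrative API. This is a hedged sketch assuming the 0.20-era HBaseAdmin methods and a made-up table name:<br />
<pre class="brush:java; gutter: false;">
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CompactionSketch {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
    admin.flush("testtable");         // what BigTable calls a "minor compaction": memstore to a new store file
    admin.compact("testtable");       // HBase "minor compaction": merge a few of the recent store files
    admin.majorCompact("testtable");  // same in both systems: rewrite everything into a single file
  }
}
</pre>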
<br />
<u>Immutable Files</u><br />
<br />
Knowing that files are fixed once written, BigTable makes the following assumption:<br />
<br />
<blockquote style="margin:1em 20px; border: 1px solid #000000; padding: 5px;">The only mutable data structure that is accessed by both reads and writes is the memtable. To reduce contention during reads of the memtable, we make each memtable row copy-on-write and allow reads and writes to proceed in parallel.<br />
</blockquote><br />
I do believe this is done similarly in HBase, but I am not sure. It certainly has the same architecture, as HDFS files for example are also immutable once written.<br />
<br />
I can only recommend that you read the BigTable paper too and make up your own mind. This post was inspired by the idea to learn what BigTable really has to offer and how much HBase has already covered. The difficult part is of course that there is not too much information available on BigTable. But the numbers even the 2006 paper lists are more than impressive. If HBase as an open-source project with just a handful of committers, most of whom have full-time day jobs, can achieve something even remotely comparable, I think this is a huge success. And looking at the 0.21 and 0.22 road map, the already small gap is going to shrink even further!Unknownnoreply@blogger.com11tag:blogger.com,1999:blog-860423771829255614.post-54118603284271465282009-11-20T06:05:00.000-08:002009-11-20T06:25:27.674-08:00HBase on Cloudera Training Virtual Machine (0.3.2)Note: This is a follow-up to my earlier <a href="http://www.larsgeorge.com/2009/10/hbase-on-cloudera-training-virtual.html">post</a>. Since then Cloudera released a new VM that includes the current 0.20 branch of Hadoop. Below I have the same post adjusted to work with that new release. Please note that there are subtle changes, for example the NameNode port has changed. So if you still have the older post in any form, please forget about it and follow this one here instead for the new VM version.<br />
<br />
You might want to run HBase on Cloudera's <a href="http://www.cloudera.com/hadoop-training-virtual-machine">Virtual Machine</a> to get a quick start to a prototyping setup. In theory you download the VM, start it and you are ready to go. The main issue though is that the current Hadoop Training VM does not include HBase at all (yet?). Apart from that, installing a local HBase instance is a straightforward process. <br />
<br />
Here are the steps to get HBase running on Cloudera's VM:<br />
<ol><li>Download VM <br />
<br />
Get it from Cloudera's <a href="http://www.cloudera.com/hadoop-training-virtual-machine">website</a>.<br />
<br />
</li>
<li>Start VM<br />
<br />
As the above page states: "To launch the VMWare image, you will either need VMware Player for windows and linux, or VMware Fusion for Mac."<br />
<br />
Note: I have Parallels for Mac and wanted to use that. I used Parallels Transporter to convert the "cloudera-training-0.3.2.vmx" to a new "cloudera-training-0.2-cl4-000001.hdd", then created a new VM in Parallels selecting Ubuntu Linux as the OS and the newly created .hdd as the disk image. Boot up the VM and you are up and running. I gave it a bit more memory for the graphics to be able to switch the VM to 1440x900, which is the native screen resolution of the MacBook Pro I am using.<br />
<br />
Finally follow the steps explained on the page above, i.e. open a Terminal and issue:<br />
<pre class="brush:plain; gutter: false;">$ cd ~/git
$ ./update-exercises --workspace
</pre><br />
</li>
<li>Pull HBase branch<br />
<br />
We are using the brand new HBase 0.20.2 release. Open a new Terminal (or issue a <code>$ cd ..</code> in the open one), then:<br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop git clone http://git.apache.org/hbase.git /home/hadoop/hbase
$ sudo -u hadoop sh -c "cd /home/hadoop/hbase ; git checkout origin/tags/0.20.2"
Note: moving to "origin/tags/0.20.2" which isn't a local branch
If you want to create a new branch from this checkout, you may do so
(now or later) by using -b with the checkout command again. Example:
git checkout -b <new_branch_name>
HEAD is now at 777fb63... HBase release 0.20.2
</pre><br />
First we clone the repository, then switch to the actual branch. You will notice that I am using <code>sudo -u hadoop</code> because Hadoop itself is started under that account and so I wanted it to match. Also, the default "training" account does not have SSH set up as explained in Hadoop's <a href="http://hadoop.apache.org/common/docs/current/quickstart.html">quick-start</a> guide. When <code>sudo</code> is asking for a password use the default, which is set to "training". <br />
<br />
You can ignore the messages git prints out while performing the checkout.<br />
<br />
</li>
<li>Build Branch<br />
<br />
Continue in Terminal:<br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop sh -c "cd /home/hadoop/hbase/ ; export PATH=$PATH:/usr/share/apache-ant-1.7.1/bin ; ant package"
...
BUILD SUCCESSFUL
</pre><br />
</li>
<li>Configure HBase<br />
<br />
There are a few edits to be made to get HBase running. <br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop vim /home/hadoop/hbase/build/conf/hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8022/hbase</value>
</property>
</configuration>
$ sudo -u hadoop vim /home/hadoop/hbase/build/conf/hbase-env.sh
# The java implementation to use. Java 1.6 required.
# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/lib/jvm/java-6-sun
...
</pre><br />
</li>
<li>Rev up the Engine!<br />
<br />
The final thing is to start HBase:<br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop /home/hadoop/hbase/build/bin/start-hbase.sh
$ sudo -u hadoop /home/hadoop/hbase/build/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Version: 0.20.2, r777fb63ff0c73369abc4d799388a45b8bda9e5fd, Thu Nov 19 15:32:17 PST 2009
hbase(main):001:0>
</pre><br />
Done!<br />
<br />
Let's create a table and check if it was created OK.<br />
<pre class="brush:plain; gutter: false;">hbase(main):001:0> list
0 row(s) in 0.0910 seconds
hbase(main):002:0> create 't1', 'f1', 'f2', 'f3'
0 row(s) in 6.1260 seconds
hbase(main):003:0> list
t1
1 row(s) in 0.0470 seconds
hbase(main):004:0> describe 't1'
DESCRIPTION ENABLED
{NAME => 't1', FAMILIES => [{NAME => 'f1', COMPRESSION => 'NONE', VERS true
IONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => '
false', BLOCKCACHE => 'true'}, {NAME => 'f2', COMPRESSION => 'NONE', V
ERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY =
> 'false', BLOCKCACHE => 'true'}, {NAME => 'f3', COMPRESSION => 'NONE'
, VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMOR
Y => 'false', BLOCKCACHE => 'true'}]}
1 row(s) in 0.0750 seconds
hbase(main):005:0>
</pre></li>
</ol>This sums it up. I hope you give HBase on the Cloudera Training VM a whirl as it also has Eclipse installed and therefore provides a quick start into Hadoop and HBase. <br />
<br />
Just keep in mind that this is for prototyping only! With such a setup you will only be able to insert a handful of rows. If you overdo it you will bring it to its knees very quickly. But you can safely use it to play around with the shell, create tables, or use the API to get familiar with it and test changes in your code, as sketched below.<br />
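<br />
If you want to try the Java API against this VM right away, a minimal client for the 't1' table created above could look like the following. It is only a sketch, assuming the 0.20 client API and that the HBase configuration directory is on the classpath so the client can find ZooKeeper:<br />
<pre class="brush:java; gutter: false;">
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VmClientSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), "t1");

    // Store a single value in column family 'f1'.
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("f1"), Bytes.toBytes("q1"), Bytes.toBytes("hello"));
    table.put(put);

    // Read it back.
    Result result = table.get(new Get(Bytes.toBytes("row1")));
    System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("f1"), Bytes.toBytes("q1"))));
  }
}
</pre>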
<br />
Finally a screenshot of the running HBase UI:<br />
<div class="separator" style="clear: both; text-align: left;"><a href="http://2.bp.blogspot.com/_Cib_A77V54U/SwaVeKlhZyI/AAAAAAAAAEw/0G60iZ0axIk/s1600/hbase-cloudera.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://2.bp.blogspot.com/_Cib_A77V54U/SwaVeKlhZyI/AAAAAAAAAEw/0G60iZ0axIk/s320/hbase-cloudera.png" /></a><br />
</div>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-860423771829255614.post-38564811563771833522009-10-20T10:22:00.000-07:002009-11-20T04:57:27.991-08:00HBase on Cloudera Training Virtual Machine (0.3.1)You might want to run HBase on Cloudera's <a href="http://www.cloudera.com/hadoop-training-virtual-machine">Virtual Machine</a> to get a quick start to a prototyping setup. In theory you download the VM, start it and you are ready to go. There are a few issues though, the worst being that the current Hadoop Training VM does not include HBase at all. Also, Cloudera is using a specific version of Hadoop that it deems stable and maintains its own release cycle. So Cloudera's version of Hadoop is 0.18.3. HBase, though, needs Hadoop 0.20 - but we are in luck, as Andrew Purtell of TrendMicro maintains a <a href="http://svn.apache.org/repos/asf/hadoop/hbase/branches/0.20_on_hadoop-0.18.3/">special branch</a> of HBase 0.20 that works with Cloudera's release. <br />
<br />
Here are the steps to get HBase running on Cloudera's VM:<br />
<ol><li>Download VM <br />
<br />
Get it from Cloudera's <a href="http://www.cloudera.com/hadoop-training-virtual-machine">website</a>.</li>
<li>Start VM<br />
<br />
As the above page states: "To launch the VMWare image, you will either need VMware Player for windows and linux, or VMware Fusion for Mac."<br />
<br />
Note: I have Parallels for Mac and wanted to use that. I used Parallels Transporter to convert the "cloudera-training-0.3.1.vmx" to a new "cloudera-training-0.2-cl3-000002.hdd", then created a new VM in Parallels selecting Ubuntu Linux as the OS and the newly created .hdd as the disk image. Boot up the VM and you are up and running. I gave it a bit more memory for the graphics to be able to switch the VM to 1440x900, which is the native resolution of the MacBook Pro I am using.<br />
<br />
Finally follow the steps explained on the page above, i.e. open a Terminal and issue:<br />
<pre class="brush:plain; gutter: false;">$ cd ~/git
$ ./update-exercises --workspace
</pre></li>
<li>Pull HBase branch<br />
<br />
Open a new Terminal (or issue a <code>$ cd ..</code> in the open one), then:<br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop git clone http://git.apache.org/hbase.git /home/hadoop/hbase
$ sudo -u hadoop sh -c "cd /home/hadoop/hbase ; git checkout origin/0.20_on_hadoop-0.18.3"
...
HEAD is now at c050f68... pull up to release
</pre><br />
First we clone the repository, then switch to the actual branch. You will notice that I am using <code>sudo -u hadoop</code> because Hadoop itself is started under that account and so I wanted it to match. Also, the default "training" account does not have SSH set up as explained in Hadoop's <a href="http://hadoop.apache.org/common/docs/current/quickstart.html">quick-start</a> guide. When <code>sudo</code> is asking for a password use the default set to "training". <br />
</li>
<li>Build Branch<br />
<br />
Continue in Terminal:<br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop sh -c "cd /home/hadoop/hbase/ ; export PATH=$PATH:/usr/share/apache-ant-1.7.1/bin ; ant package"
...
BUILD SUCCESSFUL
</pre></li>
<li>Configure HBase<br />
<br />
There are a few edits to be made to get HBase running. <br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop vim /home/hadoop/hbase/build/conf/hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
</property>
</configuration>
$ sudo -u hadoop vim /home/hadoop/hbase/build/conf/hbase-env.sh
# The java implementation to use. Java 1.6 required.
# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/lib/jvm/java-6-sun
...
</pre><br />
Note: There is a small glitch in revision 826669 of that Cloudera-specific HBase branch. The master UI (on port 60010 on localhost) will not start because a path is different and the Jetty packages are missing because of it. You can fix it by editing the start-up script and changing the path that is scanned: <br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop vim /home/hadoop/hbase/build/bin/hbase
</pre><br />
Replace<br />
<code>for f in $HBASE_HOME/lib/jsp-2.1/*.jar; do</code><br />
with<br />
<code>for f in $HBASE_HOME/lib/jetty-ext/*.jar; do</code><br />
<br />
This is only until the developers have fixed this in the branch (compare the revision I used, r813052, with what you get). Or, if you do not want the UI, you can ignore this and the error in the logs too. HBase will still run, just not its web-based interface. <br />
</li>
<li>Rev up the Engine!<br />
<br />
The final thing is to start HBase:<br />
<pre class="brush:plain; gutter: false;">$ sudo -u hadoop /home/hadoop/hbase/build/bin/start-hbase.sh
$ sudo -u hadoop /home/hadoop/hbase/build/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Version: 0.20.0-0.18.3, r813052, Mon Oct 19 06:51:57 PDT 2009
hbase(main):001:0> list
0 row(s) in 0.2320 seconds
hbase(main):002:0>
</pre><br />
Done!<br />
</li>
</ol><br />
This sums it up. I hope you give HBase on the Cloudera Training VM a whirl as it also has Eclipse installed and therefore provides a quick start into Hadoop and HBase. <br />
<br />
Just keep in mind that this is for prototyping only! With such a setup you will only be able to insert a handful of rows. If you overdo it you will bring it to its knees very quickly. But you can safely use it to play around with the shell to create tables or use the API to get used to it and test changes in your code etc.<br />
<br />
<b>Update:</b> Updated title to include version number, fixed XMLUnknownnoreply@blogger.com3tag:blogger.com,1999:blog-860423771829255614.post-80732286519209174832009-10-12T14:35:00.000-07:002009-10-19T08:12:58.732-07:00HBase Architecture 101 - StorageOne of the more hidden aspects of <a href="http://hadoop.apache.org/hbase/">HBase</a> is how data is actually stored. While the majority of users may never have to bother about it you may have to get up to speed when you want to learn what the various advanced configuration options you have at your disposal mean. "How can I tune HBase to my needs?", and other similar questions are certainly interesting once you get over the (at times steep) learning curve of setting up a basic system. Another reason wanting to know more is if for whatever reason disaster strikes and you have to recover a HBase installation. <br /><br />In my own efforts getting to know the respective classes that handle the various files I started to sketch a picture in my head illustrating the storage architecture of HBase. But while the ingenious and blessed committers of HBase easily navigate back and forth through that maze I find it much more difficult to keep a coherent image. So I decided to put that sketch to paper. Here it is. <br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_Cib_A77V54U/StorLZRjHSI/AAAAAAAAAEI/4IznGhslNxw/s1600-h/hbase-files.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 400px; height: 202px;" src="http://1.bp.blogspot.com/_Cib_A77V54U/StorLZRjHSI/AAAAAAAAAEI/4IznGhslNxw/s400/hbase-files.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5393670978492636450" /></a> Please note that this is not a UML or call graph but a merged picture of classes and the files they handle and by no means complete though focuses on the topic of this post. I will discuss the details below and also look at the configuration options and how they affect the low-level storage files.<br /><br /><u>The Big Picture</u><br /><br />So what does my sketch of the HBase innards really say? You can see that HBase handles basically two kinds of file types. One is used for the write-ahead log and the other for the actual data storage. The files are primarily handled by the <code>HRegionServer</code>'s. But in certain scenarios even the <code>HMaster</code> will have to perform low-level file operations. You may also notice that the actual files are in fact divided up into smaller blocks when stored within the Hadoop Distributed Filesystem (HDFS). This is also one of the areas where you can configure the system to handle larger or smaller data better. More on that later.<br /><br />The general flow is that a new client contacts the Zookeeper quorum (a separate cluster of Zookeeper nodes) first to find a particular row key. It does so by retrieving the server name (i.e. host name) that hosts the -ROOT- region from Zookeeper. With that information it can query that server to get the server that hosts the .META. table. Both of these two details are cached and only looked up once. Lastly it can query the .META. server and retrieve the server that has the row the client is looking for.<br /><br />Once it has been told where the row resides, i.e. in what region, it caches this information as well and contacts the <code>HRegionServer</code> hosting that region directly. So over time the client has a pretty complete picture of where to get rows from without needing to query the .META. server again. 
<br /><br />Note: The <code>HMaster</code> is responsible to assign the regions to each <code>HRegionServer</code> when you start HBase. This also includes the "special" -ROOT- and .META. tables.<br /><br />Next the <code>HRegionServer</code> opens the region it creates a corresponding <code>HRegion</code> object. When the <code>HRegion</code> is "opened" it sets up a <code>Store</code> instance for each <code>HColumnFamily</code> for every table as defined by the user beforehand. Each of the <code>Store</code> instances can in turn have one or more <code>StoreFile</code> instances, which are lightweight wrappers around the actual storage file called <code>HFile</code>. A <code>HRegion</code> also has a <code>MemStore</code> and a <code>HLog</code> instance. We will now have a look at how they work together but also where there are exceptions to the rule. <br /><br /><u>Stay Put</u><br /><br />So how is data written to the actual storage? The client issues a <code>HTable.put(Put)</code> request to the <code>HRegionServer</code> which hands the details to the matching <code>HRegion</code> instance. The first step is now to decide if the data should be first written to the "Write-Ahead-Log" (WAL) represented by the <code>HLog</code> class. The decision is based on the flag set by the client using <code>Put.writeToWAL(boolean)</code> method. The WAL is a standard Hadoop <code>SequenceFile</code> (although it is currently discussed if that should not be changed to a more HBase suitable file format) and it stores <code>HLogKey</code>'s. These keys contain a sequential number as well as the actual data and are used to replay not yet persisted data after a server crash.<br /><br />Once the data is written (or not) to the WAL it is placed in the <code>MemStore</code>. At the same time it is checked if the <code>MemStore</code> is full and in that case a flush to disk is requested. When the request is served by a separate thread in the <code>HRegionServer</code> it writes the data to an <code>HFile</code> located in the HDFS. It also saves the last written sequence number so the system knows what was persisted so far. Let"s have a look at the files now.<br /><br /><u>Files</u><br /><br />HBase has a configurable root directory in the HDFS but the default is <code>/hbase</code>. 
You can simply use the DFS tool of the Hadoop command line tool to look at the various files HBase stores.<br /><br /><pre class="brush:plain">$ hadoop dfs -lsr /hbase/docs<br />...<br />drwxr-xr-x - hadoop supergroup 0 2009-09-28 14:22 /hbase/.logs<br />drwxr-xr-x - hadoop supergroup 0 2009-10-15 14:33 /hbase/.logs/srv1.foo.bar,60020,1254172960891<br />-rw-r--r-- 3 hadoop supergroup 14980 2009-10-14 01:32 /hbase/.logs/srv1.foo.bar,60020,1254172960891/hlog.dat.1255509179458<br />-rw-r--r-- 3 hadoop supergroup 1773 2009-10-14 02:33 /hbase/.logs/srv1.foo.bar,60020,1254172960891/hlog.dat.1255512781014<br />-rw-r--r-- 3 hadoop supergroup 37902 2009-10-14 03:33 /hbase/.logs/srv1.foo.bar,60020,1254172960891/hlog.dat.1255516382506<br />...<br />-rw-r--r-- 3 hadoop supergroup 137648437 2009-09-28 14:20 /hbase/docs/1905740638/oldlogfile.log<br />...<br />drwxr-xr-x - hadoop supergroup 0 2009-09-27 18:03 /hbase/docs/999041123<br />-rw-r--r-- 3 hadoop supergroup 2323 2009-09-01 23:16 /hbase/docs/999041123/.regioninfo<br />drwxr-xr-x - hadoop supergroup 0 2009-10-13 01:36 /hbase/docs/999041123/cache<br />-rw-r--r-- 3 hadoop supergroup 91540404 2009-10-13 01:36 /hbase/docs/999041123/cache/5151973105100598304<br />drwxr-xr-x - hadoop supergroup 0 2009-09-27 18:03 /hbase/docs/999041123/contents<br />-rw-r--r-- 3 hadoop supergroup 333470401 2009-09-27 18:02 /hbase/docs/999041123/contents/4397485149704042145<br />drwxr-xr-x - hadoop supergroup 0 2009-09-04 01:16 /hbase/docs/999041123/language<br />-rw-r--r-- 3 hadoop supergroup 39499 2009-09-04 01:16 /hbase/docs/999041123/language/8466543386566168248<br />drwxr-xr-x - hadoop supergroup 0 2009-09-04 01:16 /hbase/docs/999041123/mimetype<br />-rw-r--r-- 3 hadoop supergroup 134729 2009-09-04 01:16 /hbase/docs/999041123/mimetype/786163868456226374<br />drwxr-xr-x - hadoop supergroup 0 2009-10-08 22:45 /hbase/docs/999882558<br />-rw-r--r-- 3 hadoop supergroup 2867 2009-10-08 22:45 /hbase/docs/999882558/.regioninfo<br />drwxr-xr-x - hadoop supergroup 0 2009-10-09 23:01 /hbase/docs/999882558/cache<br />-rw-r--r-- 3 hadoop supergroup 45473255 2009-10-09 23:01 /hbase/docs/999882558/cache/974303626218211126<br />drwxr-xr-x - hadoop supergroup 0 2009-10-12 00:37 /hbase/docs/999882558/contents<br />-rw-r--r-- 3 hadoop supergroup 467410053 2009-10-12 00:36 /hbase/docs/999882558/contents/2507607731379043001<br />drwxr-xr-x - hadoop supergroup 0 2009-10-09 23:02 /hbase/docs/999882558/language<br />-rw-r--r-- 3 hadoop supergroup 541 2009-10-09 23:02 /hbase/docs/999882558/language/5662037059920609304<br />drwxr-xr-x - hadoop supergroup 0 2009-10-09 23:02 /hbase/docs/999882558/mimetype<br />-rw-r--r-- 3 hadoop supergroup 84447 2009-10-09 23:02 /hbase/docs/999882558/mimetype/2642281535820134018<br />drwxr-xr-x - hadoop supergroup 0 2009-10-14 10:58 /hbase/docs/compaction.dir<br /></pre><br /><br />The first set of files are the log files handled by the <code>HLog</code> instances and which are created in a directory called <code>.logs</code> underneath the HBase root directory. Then there is another subdirectory for each <code>HRegionServer</code> and then a log for each <code>HRegion</code>. <br /><br />Next there is a file called <code>oldlogfile.log</code> which you may not even see on your cluster. They are created by one of the exceptions I mentioned earlier as far as file access is concerned. They are a result of so called "log splits". 
When the <code>HMaster</code> starts and finds that there is a log file that is not handled by a <code>HRegionServer</code> anymore it splits the log copying the <code>HLogKey</code>'s to the new regions they should be in. It places them directly in the region's directory in a file named <code>oldlogfile.log</code>. Now when the respective <code>HRegion</code> is instantiated it reads these files and inserts the contained data into its local <code>MemStore</code> and starts a flush to persist the data right away and delete the file. <br /><br />Note: Sometimes you may see left-over <code>oldlogfile.log.old</code> (yes, there is another .old at the end) which are caused by the <code>HMaster</code> trying repeatedly to split the log and found there was already another split log in place. At that point you have to consult with the <code>HRegionServer</code> or <code>HMaster</code> logs to see what is going on and if you can remove those files. I found at times that they were empty and therefore could safely be removed.<br /><br />The next set of files are the actual regions. Each region name is encoded using a Jenkins Hash function and a directory created for it. The reason to hash the region name is because it may contain characters that cannot be used in a path name in DFS. The Jenkins Hash always returns legal characters, as simple as that. So you get the following path structure:<br /><br /><code>/hbase/<tablename>/<encoded-regionname>/<column-family>/<filename></code><br /><br />In the root of the region directory there is also a <code>.regioninfo</code> holding meta data about the region. This will be used in the future by an HBase <code>fsck</code> utility (see <a href="http://issues.apache.org/jira/browse/HBASE-7">HBASE-7</a>) to be able to rebuild a broken <code>.META.</code> table. For a first usage of the region info can be seen in <a href="http://issues.apache.org/jira/browse/HBASE-1867">HBASE-1867</a>. <br /><br />In each column-family directory you can see the actual data files, which I explain in the following section in detail. <br /><br />Something that I have not shown above are split regions with their initial daughter reference files. When a data file within a region grows larger than the configured <code>hbase.hregion.max.filesize</code> then the region is split in two. This is done initially very quickly because the system simply creates two reference files in the new regions now supposed to host each half. The name of the reference file is an ID with the hashed name of the referenced region as a postfix, e.g. <code>1278437856009925445.3323223323</code>. The reference files only hold little information: the key the original region was split at and wether it is the top or bottom reference. Of note is that these references are then used by the <code>HalfHFileReader</code> class (which I also omitted from the big picture above as it is only used temporarily) to read the original region data files. Only upon a compaction the original files are rewritten into separate files in the new region directory. This also removes the small reference files as well as the original data file in the original region. <br /><br />And this also concludes the file dump here, the last thing you see is a <code>compaction.dir</code> directory in each table directory. They are used when splitting or compacting regions as noted above. 
They are usually empty and are used as a scratch area to stage the new data files before swapping them into place.<br /><br /><u>HFile</u><br /><br />So we are now at a very low level of HBase's architecture. <code>HFile</code>'s (kudos to Ryan Rawson) are the actual storage files, specifically created to serve one purpose: store HBase's data fast and efficiently. They are apparently based on Hadoop's <code>TFile</code> (see <a href="http://issues.apache.org/jira/browse/HADOOP-3315">HADOOP-3315</a>) and mimic the SSTable format used in Googles BigTable architecture. The previous use of Hadoop's <code>MapFile</code>'s in HBase proved to be not good enough performance wise. So how do the files look like?<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_Cib_A77V54U/SteEzNS2qPI/AAAAAAAAAD4/z13-DGcA_qs/s1600-h/hfile.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 400px; height: 118px;" src="http://4.bp.blogspot.com/_Cib_A77V54U/SteEzNS2qPI/AAAAAAAAAD4/z13-DGcA_qs/s400/hfile.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5392925094076393714" /></a> The files have a variable length, the only fixed blocks are the FileInfo and Trailer block. As the picture shows it is the Trailer that has the pointers to the other blocks and it is written at the end of persisting the data to the file, finalizing the now immutable data store. The Index blocks record the offsets of the Data and Meta blocks. Both the Data and the Meta blocks are actually optional. But you most likely you would always find data in a data store file.<br /><br />How is the block size configured? It is driven solely by the <code>HColumnDescriptor</code> which in turn is specified at table creation time by the user or defaults to reasonable standard values. Here is an example as shown in the master web based interface: <br /><br /><code>{NAME => 'docs', FAMILIES => [{NAME => 'cache', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'contents', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'false'}, ...<br /></code><br /><br />The default is "64KB" (or 65535 bytes). Here is what the HFile JavaDoc explains:<br /><br /><blockquote>"Minimum block size. We recommend a setting of minimum block size between 8KB to 1MB for general usage. Larger block size is preferred if files are primarily for sequential access. However, it would lead to inefficient random access (because there are more data to decompress). Smaller blocks are good for random access, but require more memory to hold the block index, and may be slower to create (because we must flush the compressor stream at the conclusion of each data block, which leads to an FS I/O flush). Further, due to the internal caching in Compression codec, the smallest possible block size would be around 20KB-30KB."</blockquote><br /><br />So each block with its prefixed "magic" header contains either plain or compressed data. How that looks like we will have a look at in the next section.<br /><br />One thing you may notice is that the default block size for files in DFS is 64MB, which is 1024 times what the <code>HFile</code> default block size is. So the HBase storage files blocks do <u>not</u> match the Hadoop blocks. 
Therefore you have to think about both parameters separately and find the sweet spot in terms of performance for your particular setup.<br /><br />One option in the HBase configuration you may see is <code>hfile.min.blocksize.size</code>. It seems to be only used during migration from earlier versions of HBase (since it had no block file format) and when directly creating <code>HFile</code> during bulk imports for example.<br /><br />So far so good, but how can you see if a <code>HFile</code> is OK or what data it contains? There is an App for that!<br /><br />The <code>HFile.main()</code> method provides the tools to dump a data file:<br /><br /><pre class="brush:plain">$ hbase org.apache.hadoop.hbase.io.hfile.HFile<br />usage: HFile [-f <arg>] [-v] [-r <arg>] [-a] [-p] [-m] [-k]<br /> -a,--checkfamily Enable family check<br /> -f,--file <arg> File to scan. Pass full-path; e.g.<br /> hdfs://a:9000/hbase/.META./12/34<br /> -k,--checkrow Enable row order check; looks for out-of-order keys<br /> -m,--printmeta Print meta data of file<br /> -p,--printkv Print key/value pairs<br /> -r,--region <arg> Region to scan. Pass region name; e.g. '.META.,,1'<br /> -v,--verbose Verbose output; emits file and meta data delimiters<br /></pre><br />Here is an example of what the output will look like (shortened here):<br /><br /><pre class="brush:plain">$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -p -m -f \<br /> hdfs://srv1.foo.bar:9000/hbase/docs/999882558/mimetype/2642281535820134018<br /><br />Scanning -> hdfs://srv1.foo.bar:9000/hbase/docs/999882558/mimetype/2642281535820134018<br />...<br />K: \x00\x04docA\x08mimetype\x00\x00\x01\x23y\x60\xE7\xB5\x04 V: text\x2Fxml<br />K: \x00\x04docB\x08mimetype\x00\x00\x01\x23x\x8C\x1C\x5E\x04 V: text\x2Fxml<br />K: \x00\x04docC\x08mimetype\x00\x00\x01\x23xz\xC08\x04 V: text\x2Fxml<br />K: \x00\x04docD\x08mimetype\x00\x00\x01\x23y\x1EK\x15\x04 V: text\x2Fxml<br />K: \x00\x04docE\x08mimetype\x00\x00\x01\x23x\xF3\x23n\x04 V: text\x2Fxml<br />Scanned kv count -> 1554<br /><br />Block index size as per heapsize: 296<br />reader=hdfs://srv1.foo.bar:9000/hbase/docs/999882558/mimetype/2642281535820134018, \<br /> compression=none, inMemory=false, \<br /> firstKey=US6683275_20040127/mimetype:/1251853756871/Put, \<br /> lastKey=US6684814_20040203/mimetype:/1251864683374/Put, \<br /> avgKeyLen=37, avgValueLen=8, \<br /> entries=1554, length=84447<br />fileinfoOffset=84055, dataIndexOffset=84277, dataIndexCount=2, metaIndexOffset=0, \<br /> metaIndexCount=0, totalBytes=84055, entryCount=1554, version=1<br />Fileinfo:<br />MAJOR_COMPACTION_KEY = \xFF<br />MAX_SEQ_ID_KEY = 32041891<br />hfile.AVG_KEY_LEN = \x00\x00\x00\x25<br />hfile.AVG_VALUE_LEN = \x00\x00\x00\x08<br />hfile.COMPARATOR = org.apache.hadoop.hbase.KeyValue\x24KeyComparator<br />hfile.LASTKEY = \x00\x12US6684814_20040203\x08mimetype\x00\x00\x01\x23x\xF3\x23n\x04<br /></pre><br />The first part is the actual data stored as <code>KeyValue</code> pairs, explained in detail in the next section. The second part dumps the internal <code>HFile.Reader</code> properties as well as the Trailer block details and finally the FileInfo block values. This is a great way to check if a data file is still healthy. <br /><br /><u>KeyValue's</u><br /><br />In essence each <code>KeyValue</code> in the <code>HFile</code> is simply a low-level byte array that allows for "zero-copy" access to the data, even with lazy or custom parsing if necessary. 
How are the instances arranged?<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_Cib_A77V54U/StZMrzaKufI/AAAAAAAAADo/ZhK7bGoJdMQ/s1600-h/KeyValue.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 400px; height: 62px;" src="http://2.bp.blogspot.com/_Cib_A77V54U/StZMrzaKufI/AAAAAAAAADo/ZhK7bGoJdMQ/s400/KeyValue.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5392581919240796658" /></a> The structure starts with two fixed length numbers indicating the size of the key and the value part. With that info you can offset into the array to for example get direct access to the value, ignoring the key - if you know what you are doing. Otherwise you can get the required information from the key part. Once parsed into a <code>KeyValue</code> object you have getters to access the details.<br /><br />Note: One thing to watch out for is the difference between <code>KeyValue.getKey()</code> and <code>KeyValue.getRow()</code>. I think for me the confusion arose from referring to "row keys" as the primary key to get a row out of HBase. That would be the latter of the two methods, i.e. <code>KeyValue.getRow()</code>. The former simply returns the complete byte array part representing the raw "key" as colored and labeled in the diagram. <br /><br />This concludes my analysis of the HBase storage architecture. I hope it provides a starting point for your own efforts to dig into the grimy details. Have fun!<br /><br /><b>Update:</b> Slightly updated with more links to JIRA issues. Also added Zookeeper to be more precise about the current mechanisms to look up a region.<br /><br /><b>Update 2:</b> Added details about region references.<br /><br /><b>Update 3:</b> Added more details about region lookup as requested.Unknownnoreply@blogger.com42tag:blogger.com,1999:blog-860423771829255614.post-23153558573133747482009-10-12T13:20:00.000-07:002009-12-14T01:19:24.582-08:00Hive vs. PigWhile I was looking at <a href="http://wiki.apache.org/hadoop/Hive">Hive</a> and <a href="http://hadoop.apache.org/pig/">Pig</a> for processing large amounts of data without the need to write MapReduce code I found that there is no easy way to compare them against each other without reading into both in greater detail.<br />
<br />
In this post I am trying to give you a 10,000ft view of both and compare some of the more prominent and interesting features. The following table - which is discussed below - compares what I deemed to be such features:<br />
<br />
<table border="1" cellspacing="0" cellpadding="0" style="border-collapse:collapse;border:none; "><tbody>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border-top:solid black 1.0pt;border-left:solid black 1.0pt; border-bottom:none;border-right:solid windowtext 1.0pt; background:black;padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif";color:white;">Feature</span></b></p></td> <td width="156" style="width:117.0pt;border:solid windowtext 1.0pt;border-left: none; background:black;padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"; color:white;">Hive</span></b></p></td> <td width="147" style="width:110.2pt;border:solid windowtext 1.0pt;border-left: none; background:black;padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"; color:white;">Pig</span></b></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border:solid black 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Language</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">SQL-like</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">PigLatin</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border-top:none;border-left:solid black 1.0pt; border-bottom:none;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Schemas/Types</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes (explicit)</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes (implicit)</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border:solid black 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Partitions</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">No</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border-top:none;border-left:solid black 1.0pt; border-bottom:none;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Server</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Optional (Thrift)</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">No</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border:solid black 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">User Defined Functions (UDF)</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes (Java)</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes (Java)</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border-top:none;border-left:solid black 1.0pt; border-bottom:none;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Custom Serializer/Deserializer</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border:solid black 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">DFS Direct Access</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes (implicit)</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes (explicit)</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border-top:none;border-left:solid black 1.0pt; border-bottom:none;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Join/Order/Sort</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border:solid black 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Shell</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border-top:none;border-left:solid black 1.0pt; border-bottom:none;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt; height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Streaming</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border:solid black 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Web Interface</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">Yes</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal;"><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">No</span></p></td> </tr>
<tr style="height:.3in"> <td width="335" style="width:251.6pt;border-top:none;border-left:solid black 1.0pt; border-bottom:solid black 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p style="margin-bottom:0in;margin-bottom:.0001pt;line-height: normal;"><b><span style="font-size:9.0pt;font-family:"Verdana","sans-serif"">JDBC/ODBC</span></b></p></td> <td width="156" style="width:117.0pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">Yes (limited)</span></p></td> <td width="147" style="width:110.2pt;border-top:none;border-left:none; border-bottom:solid windowtext 1.0pt;border-right:solid windowtext 1.0pt; padding:0in 5.4pt 0in 5.4pt;height:.3in"> <p align="center" style="margin-bottom:0in;margin-bottom:.0001pt; text-align:center;line-height:normal"><span style="font-size:9.0pt; font-family:"Verdana","sans-serif"">No</span></p></td></tr>
</tbody></table><br />
<br />
Let us now look at each of these in a bit more detail.<br />
<br />
<u>General Purpose</u><br />
<br />
The question is "What does Hive or Pig solve?". Both - and I think this is lucky for us with regard to comparing them - have a very similar goal. They try to ease the complexity of writing MapReduce jobs in a programming language like Java by giving the user a set of tools that they may be more familiar with (more on this below). The raw data is stored in Hadoop's HDFS and can be in any format, although natively it usually is a TAB-separated text file, while internally they also may make use of Hadoop's SequenceFile file format. The idea is to be able to parse the raw data file, for example a web server log file, and use the contained information to slice and dice it into what the business needs. Therefore they provide means to aggregate fields based on specific keys. In the end they both emit the result again in either text or a custom file format. Efforts are also underway to have both use other systems as a source for data, for example HBase.<br />
<br />
The features I am comparing are chosen pretty much at random because they stood out when I read into each of these two frameworks. So keep in mind that this is a subjective list.<br />
<br />
<u>Language</u><br />
<br />
Hive lends itself to SQL. But since we can only read already existing files in HDFS it lacks UPDATE or DELETE support, for example. It focuses primarily on the query part of SQL. But even there it has its own spin on things to better reflect the underlying MapReduce process. Overall it seems that someone familiar with SQL can very quickly learn Hive's version of it and get results fast.<br />
<br />
Pig on the other hand looks more like a very simplistic scripting language. As with those (and this is a nearly religious topic) some are more intuitive and some less so. With PigLatin I was able to see what the samples do, but lacking full knowledge of its syntax I found myself wondering whether I would really be able to get what I needed without too many trial-and-error loops. Sure, Hive's SQL probably needs as many iterations to fully grasp - but there is at least a greater understanding of what to expect.<br />
<br />
<u>Schemas/Types</u><br />
<br />
Hive once more uses a specific variation of SQL's Data Definition Language (DDL). It defines the "tables" beforehand and stores the schema in either a shared or a local database. Any JDBC offering will do, but it also comes with a built-in Derby instance to get you started quickly. If the database is local then only you can run specific Hive commands. If you share the database then others can also run these - or they would have to set up their own local database copy. Types are also defined upfront, and supported types are INT, BIGINT, BOOLEAN, STRING and so on. There are also array types that let you handle specific fields in the raw data files as a group.<br />
<br />
Pig has no such metadata database. Datatypes and schemas are defined within each script. Furthermore, types are usually determined automatically by their use. So if you use a field as an Integer it is handled that way by Pig. You do have the option though to override this and use explicit type definitions, again within the script where you need them. Pig has a similar set of types compared to Hive. For example, it also has an array type called "bag".<br />
<br />
<u>Partitions</u><br />
<br />
Hive has a notion of partitions. They are basically subdirectories in HDFS. This allows, for example, processing a subset of the data by alphabet or date. It is up to the user to create these "partitions", as they are neither enforced nor required.<br />
<br />
Pig does not seem to have such a feature. It may be that filters can achieve the same but it is not immediately obvious to me.<br />
<br />
<u>Server</u><br />
<br />
Hive can start an optional server, which is allegedly Thrift based. With the server I presume you can send queries from anywhere to the Hive server which in turn executes them.<br />
<br />
Pig does not seem to have such a facility yet.<br />
<br />
<u>User Defined Functions</u><br />
<br />
Hive and Pig allow for user-defined functionality by supplying Java code to the query process. These functions can add any feature that is needed to crunch the numbers as required.<br />
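<br />As an illustration, a user defined function for Hive is typically a small Java class with an <code>evaluate()</code> method. The sketch below simply lower-cases a string; the package and class names are made up, and the exact base class to extend should be verified against the Hive release you use (Pig's equivalent would be a class extending its <code>EvalFunc</code>):<br /><br /><pre class="brush:java">
package com.example.hive; // made-up package for illustration

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// A trivial UDF: returns the lower-case version of its input string.
public final class Lower extends UDF {
  public Text evaluate(final Text s) {
    if (s == null) {
      return null;
    }
    return new Text(s.toString().toLowerCase());
  }
}
</pre><br />Once packaged into a jar, such a function can be registered from the Hive shell, for example with something along the lines of <code>ADD JAR my-udfs.jar;</code> followed by <code>CREATE TEMPORARY FUNCTION my_lower AS 'com.example.hive.Lower';</code> - the jar and function names here are again purely illustrative.<br />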
<br />
<u>Custom Serializer/Deserializer</u><br />
<br />
Again, both Hive and Pig allow for custom Java classes that can read or write any file format required. I also assume that is how they will connect to HBase eventually (just a guess). You can write a parser for Apache log files or, for example, the binary <a href="http://github.com/larsgeorge/ulog-reader">Tokyo Tyrant Ulog</a> format. The same goes for the output: write a database output class and you can write the results back into a database.<br />
<br />
<u>DFS Direct Access</u><br />
<br />
Hive is smart about how to access the raw data. A "select * from table limit 10" for example does a direct read from the file. If the query is too complicated it will fall back to a full MapReduce run to determine the outcome, just as expected.<br />
<br />
With Pig I am not sure if it does the same to speed up simple PigLatin scripts. At least it does not seem to be mentioned anywhere as an important feature.<br />
<br />
<u>Join/Order/Sort</u><br />
<br />
Hive and Pig have support for joining, ordering or sorting data dynamically. These features serve the same purpose in both, pretty much allowing you to aggregate and sort the result as needed. Pig also has a COGROUP feature that allows you to do OUTER JOINs and so on. I think this is where you will spend most of your time with either package - especially when you start out. But from a cursory look it seems both can do pretty much the same.<br />
<br />
<u>Shell</u><br />
<br />
Both Hive and Pig have a shell that allows you to query specific things or run the actual queries. Pig also passes on DFS commands such as "cat" to allow you to quickly check what the outcome of a specific PigLatin script was.<br />
<br />
<u>Streaming</u><br />
<br />
Once more, both frameworks seem to provide streaming interfaces so that you can process data with external tools or languages, such as Ruby or Python. I do not know how the streaming performs or whether it affects the two differently. This is for you to tell me :)<br />
<br />
<u>Web Interface</u><br />
<br />
Only Hive has a <a href="http://wiki.apache.org/hadoop/Hive/HiveWebInterface">web interface</a> or UI that can be used to visualize the various schemas and issue queries. This is different to the above mentioned Server as it is an interactive web UI for a human operator. The Hive Server is for use from another programming or scripting language for example.<br />
<br />
<u>JDBC/ODBC</u><br />
<br />
Another Hive-only feature is the availability of a JDBC/ODBC driver - again with limited functionality. It is another way for programmers to use Hive without having to bother with its shell or web interface, or even the Hive Server. Since only a subset of features is available it will require small adjustments on the programmer's side of things, but otherwise it seems like a nice-to-have feature.<br />
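<br />To give an idea of what this looks like from the programmer's side, here is a rough sketch of using the driver from Java. The driver class name, URL scheme and port are the ones commonly documented for Hive at that time, but verify them against your installation; the table and query are made up:<br /><br /><pre class="brush:java">
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
  public static void main(String[] args) throws Exception {
    // Load the Hive JDBC driver (class name as documented for Hive at the time).
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    // Connect to a Hive Server assumed to be running on localhost, port 10000.
    Connection con =
        DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
    Statement stmt = con.createStatement();
    // A hypothetical query against a table named 'weblogs'.
    ResultSet res = stmt.executeQuery("SELECT host, COUNT(1) FROM weblogs GROUP BY host");
    while (res.next()) {
      System.out.println(res.getString(1) + "\t" + res.getLong(2));
    }
    con.close();
  }
}
</pre><br />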
<br />
<u>Conclusion</u><br />
<br />
Well, it seems to me that both can help you achieve the same goals, while Hive comes more naturally to database developers and Pig to "script kiddies" (just kidding). Hive has more features as far as access choices are concerned. Both projects also reportedly have roughly the same number of committers and are going strong development-wise.<br />
<br />
This is it from me. If you have a different opinion or a comment on the above, please feel free to reply below. Over and out!Unknownnoreply@blogger.com8tag:blogger.com,1999:blog-860423771829255614.post-36505047810918751912009-05-26T05:12:00.000-07:002009-05-26T11:05:56.124-07:00HBase Schema ManagerAs already mentioned in one of my <a href="http://www.larsgeorge.com/2009/01/changing-hbase-tables-in-code.html">previous</a> posts, HBase at times makes it difficult to maintain or even create a new table structure. Imagine you have a running cluster and quite an elaborate table setup. Now you want to create a backup cluster for load balancing and general tasks like reporting etc. How do you get all the values from one system into the other?<br /><br />While you can use various <a href="http://hadoop.apache.org/hbase/docs/r0.19.0/api/org/apache/hadoop/hbase/mapred/package-summary.html">examples</a> that help you back up the data and eventually restore it, how do you "clone" the table schemas?<br /><br />Or imagine you have an existing system like the one we talked about above and you simply want to change a few things around. With an RDBMS you can save the required steps in a DDL statement and execute it on the server - or the backup server etc. But with HBase there is no DDL, nor is there the possibility of executing pre-built scripts against a running cluster.<br /><br />What I described in my <a href="http://www.larsgeorge.com/2009/01/changing-hbase-tables-in-code.html">previous</a> post was a way to store the table schemas in an XML configuration file and run that against a cluster. The code handles adding new tables and more importantly the addition, removal and modification of column families for any named table. <br /><br />I have put this all into a separate Java application that may be useful to you. You can get it from my GitHub <a href="http://github.com/larsgeorge/hbase-schema-manager/tree/master">repository</a>. It is really simple to use: you create an XML-based configuration file, for example:<br /><pre class="brush:xml"><?xml version="1.0" encoding="UTF-8"?><br /><configurations><br /> <configuration><br /> <name>foo</name><br /> <description>Configuration for the FooBar HBase cluster.</description><br /> <hbase_master>foo.bar.com:60000</hbase_master><br /> <schema><br /> <table><br /> <name>test</name><br /> <description>Test table.</description><br /> <column_family><br /> <name>sample</name><br /> <description>Sample column.</description><br /> <!-- Default: 3 --><br /> <max_versions>1</max_versions><br /> <!-- Default: DEFAULT_COMPRESSION_TYPE --><br /> <compression_type/><br /> <!-- Default: false --><br /> <in_memory/><br /> <!-- Default: false --><br /> <block_cache_enabled/><br /> <!-- Default: -1 (forever) --><br /> <time_to_live/><br /> <!-- Default: 2147483647 --><br /> <max_value_length/><br /> <!-- Default: DEFAULT_BLOOM_FILTER_DESCRIPTOR --><br /> <bloom_filter/><br /> </column_family><br /> </table><br /> </schema><br /> </configuration><br /></configurations></pre><br />Then all you have to do is run the application like so:<br /><br /><code>java -jar hbase-manager-1.0.jar schema.xml</code><br /><br />The "schema.xml" is the above XML configuration saved on your local machine. 
The output shows the steps performed:<br /><pre class="brush:plain">$ java -jar hbase-manager-1.0.jar schema.xml<br /> creating table test...<br /> table created<br />done.<br /></pre><br />You can also specify more options on the command line:<br /><pre class="brush:plain">usage: HbaseManager [<options>] <schema-xml-filename> [<config-name>]<br /> -l,--list lists all tables but performs no further action.<br /> -n,--nocreate do not create non-existent tables.<br /> -v,--verbose print verbose output.<br /></pre><br />If you use the "verbose" option you get more details:<br /><pre class="brush:plain">$ java -jar hbase-manager-1.0.jar -v schema.xml<br />schema filename: schema.xml<br />configuration used: null<br />using config number: default<br />table schemas read from config: <br /> [name -> test<br /> description -> Test table.<br /> columns -> {sample=name -> sample<br /> description -> Sample column.<br /> maxVersions -> 1<br /> compressionType -> NONE<br /> inMemory -> false<br /> blockCacheEnabled -> false<br /> maxValueLength -> 2147483647<br /> timeToLive -> -1<br /> bloomFilter -> false}]<br /> hbase.master -> foo.bar.com:60000<br /> authoritative -> true<br /> name -> test<br /> tableExists -> true<br /> changing table test...<br /> no changes detected!<br />done.<br /></pre><br />Finally you can use the "list" option to check initial connectivity and the successful changes:<br /><pre class="brush:plain">$ java -jar hbase-manager-1.0.jar -l schema.xml<br />tables found: 1<br /> test<br />done.<br /></pre><br />A few notes: First and most importantly, if you change a large table, i.e. one with thousands of regions, this process can take quite a long time. This is caused by the <code>enableTable()</code> call having to scan the complete .META. table to assign the regions to their respective region servers. There is possibly room for improvement in my little application to handle this better - suggestions welcome!<br /><br />Also, I do not have <span style="font-style:italic;">Bloom Filter</span> settings implemented, as this is still changing from 0.19 to 0.20. Once it has been finalized I will add support for it.<br /><br />If you do not specify a configuration name then the first one is used. 
Having more than one configuration allows you to have multiple clusters defined in one schema file and by specifying the name you can execute only a specific one when you need to.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-860423771829255614.post-4108275673641369152009-05-11T15:08:00.000-07:002009-05-19T14:19:36.340-07:00HBase MapReduce 101 - Part IIn this and the following posts I would like to take the opportunity to go into detail about the <a href="http://labs.google.com/papers/mapreduce.html">MapReduce</a> process as provided by <a href="http://hadoop.apache.org/core/docs/current/mapred_tutorial.html">Hadoop</a> but more importantly how it applies to <a href="http://hadoop.apache.org/hbase/">HBase</a>.<br /><br /><h3>MapReduce</h3><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_Cib_A77V54U/ShJ8K99N0fI/AAAAAAAAACY/aFbcbtIK4nI/s1600-h/MapReduce2.png"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 400px; height: 340px;" src="http://1.bp.blogspot.com/_Cib_A77V54U/ShJ8K99N0fI/AAAAAAAAACY/aFbcbtIK4nI/s400/MapReduce2.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5337465036259316210" /></a><br />MapReduce as a process was designed to solve the problem of processing in excess of terabytes of data in a scalable way. There should be a way to design such a system that increases in performance linearly with the number of physical machines added. That is what MapReduce strives to do. It follows a divide-and-conquer approach by splitting the data located on a distributed file system so that the servers (or rather cpu's, or more modern "cores") available can access these pieces and process them as fast as they can. The problem with this approach is that you will have to consolidate the data at the end. Again, MapReduce has this built right into it.<br /><br />The above simplified image of the MapReduce process shows you how the data is processed. The first thing that happens is the <span style="font-style:italic;">split</span> which is responsible to divide the input data in reasonable size chunks that are then processed by one server at a time. This splitting has to be done somewhat smart to make best use of available servers and the infrastructure in general. In this example the data may be a very large log file that is divided into equal size pieces on line boundaries. This is OK for example for say Apache log files. Input data may also be binary though where you may have to write your own <code>getSplits()</code> method - but more on that below. <br /><br /><h3>Classes</h3><br />The above image also shows you the classes that are involved in the Hadoop implementation of MapReduce. Let's look at them and also at the specific implementations that HBase provides on top of those. <br /><br /><u>InputFormat</u><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_Cib_A77V54U/ShLU17tI4bI/AAAAAAAAADI/w1ah_hgGZpM/s1600-h/input.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 600px;" src="http://4.bp.blogspot.com/_Cib_A77V54U/ShLU17tI4bI/AAAAAAAAADI/w1ah_hgGZpM/s800/input.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5337562531412631986" /></a><br />The first class to deal with is the <code>InputFormat</code> class. It is responsible for two things. 
First, it does the actual splitting of the input data; second, it returns a <code>RecordReader</code> instance that defines the classes of the <span style="font-style:italic;">key</span> and <span style="font-style:italic;">value</span> objects and provides a <code>next()</code> method that is used to iterate over each input record. <br /><br />As far as HBase is concerned there is a special implementation called <code>TableInputFormatBase</code> as well as its subclass <code>TableInputFormat</code>. The former implements the majority of the functionality but remains abstract. The subclass is a light-weight concrete version of it and is used by many supplied sample and real MapReduce classes.<br /><br />But most importantly, these classes implement the full turn-key solution to scan a HBase table. You can provide the name of the table to scan and the columns you want to process during the Map phase. It splits the table into proper pieces for you and hands them over to the subsequent classes. There are quite a few tweaks which we will address below and in the following installments of this series.<br /><br />For now let's look at the other classes involved.<br /><br /><u>Mapper</u><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_Cib_A77V54U/ShKM4M07-nI/AAAAAAAAACo/x99sr5Evg7s/s1600-h/mapper.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 474px;" src="http://4.bp.blogspot.com/_Cib_A77V54U/ShKM4M07-nI/AAAAAAAAACo/x99sr5Evg7s/s800/mapper.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5337483405531282034" /></a><br />The <code>Mapper</code> class(es) are for the next stage of the MapReduce process and one of its namesakes. In this step each record read using the <code>RecordReader</code> is processed using the <code>map()</code> method. What is also visible somewhat from the first figure above is that the Mapper reads a specific type of key/value pair but possibly emits another. This is handy to convert the raw data into something more useful for further processing. <br /><br />Again, looking at HBase's extensions to this, you will find a <code>TableMap</code> class that is specific to iterating over a HBase table. One specific implementation is the <code>IdentityTableMap</code>, which is also a good example of how to add your own functionality to the supplied classes. The <code>TableMap</code> class itself does not implement anything but only adds the signatures of what the actual key/value pair classes are. The <code>IdentityTableMap</code> is simply passing on the records to the next stage of the processing.<br /><br /><u>Reducer</u><br /><br /> <a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_Cib_A77V54U/ShKPELsMsMI/AAAAAAAAACw/wwLwE9Ez9hY/s1600-h/reduce.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 504px;" src="http://1.bp.blogspot.com/_Cib_A77V54U/ShKPELsMsMI/AAAAAAAAACw/wwLwE9Ez9hY/s800/reduce.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5337485810407878850" /></a><br />The <code>Reduce</code> stage and class layout is very similar to the Mapper one explained above. This time we get the output of a Mapper class and process it after the data was <span style="font-style:italic;">shuffled</span> and <span style="font-style:italic;">sorted</span>. 
In the implicit shuffle between the Mapper and Reducer stages the intermediate data is copied from the different Map servers to the Reduce servers and the sort combines the shuffled (copied) data so that the Reducer sees the intermediate data as a nicely sorted set where now each unique key (and that is something I will get back to later) is associated with all of the possible values it was found with. <br /><br /><u>OutputFormat</u><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_Cib_A77V54U/ShLQzvCHrwI/AAAAAAAAADA/FFzv3MsXg2I/s1600-h/output.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 580px;" src="http://2.bp.blogspot.com/_Cib_A77V54U/ShLQzvCHrwI/AAAAAAAAADA/FFzv3MsXg2I/s800/output.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5337558095604723458" /></a><br />The final stage is the OutputFormat class, and its job is to persist the data in various locations. There are specific implementations that allow output to files, or to HBase tables in the case of the <code>TableOutputFormat</code>. It uses a <code>RecordWriter</code> to write the data into the specific HBase output table. <br /><br />It is important to note the cardinality as well. While there are many Mappers handing records to many Reducers, there is only one OutputFormat that takes each output record from its Reducer subsequently. It is the final class that handles the key/value pairs and writes them to their final destination, this being a file or a table.<br /><br />The name of the output table is specified when the job is created. Otherwise it does not add much more complexity. One rather significant thing it does is set the table's <span style="font-style:italic;">auto flush</span> to "false" and handle the buffer flushing implicitly. This helps a lot in <a href="http://ryantwopointoh.blogspot.com/2009/01/performance-of-hbase-importing.html">speeding up</a> the import of large data sets.<br /><br /><h3>To Map or Reduce or Not Map or Reduce</h3><br />We are now at a crucial point: deciding on how to process tables stored in HBase. From the above it seems that we simply use a TableInputFormat to feed through a TableMap and TableReduce to eventually persist the data with a TableOutputFormat. But this may not be what you want when you deal with HBase. The question is whether there is a better way to handle the process given certain specific architectural features HBase provides. Depending on your data source and target there are a few different scenarios we could think of.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_Cib_A77V54U/ShKfjYi6gGI/AAAAAAAAAC4/9y5pLXdhl_c/s1600-h/table.png"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://4.bp.blogspot.com/_Cib_A77V54U/ShKfjYi6gGI/AAAAAAAAAC4/9y5pLXdhl_c/s400/table.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5337503938620588130" /></a> If you want to import a large set of data into a HBase table you can read the data using a Mapper, aggregate it on a per-key basis using a Reducer, and finally write it into a HBase table. This involves the whole MapReduce stack including the shuffle and sort using intermediate files. But what if you know that the data for example has a unique key? Why go through the extra step of copying and sorting when there is always just exactly one key/value pair? 
At this point you can ask yourself, wouldn't it be better if I could skip that whole reduce stage? The answer is <u>yes</u>, you can! And you should, as you can harvest the pure computational power of all CPUs to crunch the data and write it at top IO speed to its final target. <br /><br />As you can see from the matrix, there are quite a few scenarios where you can decide if you want Map only or both, Map and Reduce. But when it comes to handling HBase tables as sources and targets there are a few exceptions to this rule and I highlighted them accordingly. <br /><br /><u>Same or different Tables</u><br /><br />This is an important distinction. The bottom line is, when you read a table in the Map stage you should consider not writing back to that very same table in the same process. It could on one hand hinder the proper distribution of regions across the servers (open scanners block region splits) and on the other hand you may or may not see the new data as you scan. But when you read from one table and write to another then you can do that in a single stage. So for two different tables you can write your table updates directly in the <code>TableMap.map()</code> while with the same table you must write the same code in the <code>TableReduce.reduce()</code> - or in its <code>TableOutputFormat</code> (or even better simply use that class as is and you are done). The reason is that the Map stage completely reads a table and then passes the data on in intermediate files to the Reduce stage. In turn this means that the Reducer reads from the distributed file system (DFS) and writes into the now idle HBase table. And all is well.<br /><br />All of the above are simply recommendations based on what is currently available with HBase. There is of course no reason not to scan and modify a table in the same process. But to avoid certain non-deterministic issues I personally would not recommend this - especially if you are new to HBase. Maybe this could also be taken into consideration when designing your HBase tables. Maybe you separate distinct data into two tables or separate column families so you can scan one table while changing the other. <br /><br /><u>Key Distribution</u><br /><br />Another specific requirement for an effective import is to have random keys as they are read. While this will be difficult if you scan a table in the Map phase, as keys are sorted, you may be able to make use of this when reading from a raw data file. Instead of leaving the key as the offset into the file, as created by the <code>TextInputFormat</code> for example, you could simply replace the rather useless offset with a random key. This will guarantee that the data is spread across all servers more evenly. Especially the <code>HRegionServers</code> will be very thankful as they each host a set of regions, and random keys make for a random load on these regions. <br /><br />Of course this depends on how the data is written to the raw files or how the real row keys are computed, but it is still a very valuable thing to keep in mind.<br /><br />In the next post I will show you how to import data from a raw data file into a HBase table and how you eventually process the data in the HBase table. We will address questions like how many mappers and/or reducers are needed and how you can improve import and processing performance. 
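<br /><br />To make the class names above a little more tangible, here is a rough sketch of a pass-through Mapper for a table scan, written from memory against the old <code>org.apache.hadoop.hbase.mapred</code> API discussed in this post. The exact types changed between releases (for example whether the row value is a <code>RowResult</code> or a <code>Result</code>), so treat class names and signatures as illustrative only:<br /><br /><pre class="brush:java">
import java.io.IOException;

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Emits every scanned row unchanged - essentially what IdentityTableMap does.
// For a map-only job that targets a *different* table you could instead open
// an HTable instance in configure() and write your updates directly from map().
public class PassThroughMap extends MapReduceBase
    implements TableMap<ImmutableBytesWritable, RowResult> {

  public void map(ImmutableBytesWritable key, RowResult row,
      OutputCollector<ImmutableBytesWritable, RowResult> output,
      Reporter reporter) throws IOException {
    output.collect(key, row);
  }
}
</pre><br />Such a Mapper is then typically wired into a job together with the table name and the columns to scan, using the helper methods in the same package (for example <code>TableMapReduceUtil.initTableMapJob()</code>) - again, check the exact signatures for your version.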
<br /><br />Until then, have a good day!Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-860423771829255614.post-16035460210821416392009-05-03T11:09:00.000-07:002009-11-19T03:13:45.935-08:00European HBase Ambassador<p>I am on a mission! The mission is to spread the word on HBase. There are only a few choices when it comes to large scale data storage solutions. What I am referring to is not your vertical scale, big-iron relational database system. The time is now for a <a href="http://www.cringely.com/2009/05/the-sequel-dilemma/">paradigm shift</a>.</p><p>There are <a href="http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores/">various</a> key/value or more structured stores that strive to achieve the same - but getting proper information is often a big challenge. Either there are no proper examples, or no use-cases for that matter. Even worse are performance details, the usual rap being "all depends!" - of course it does. Without real examples it is difficult to determine if a system that seems suitable design-wise will also hold up to the task at hand. How many servers are needed? How should the hardware stack be organized?</p><p>Here is where I feel I can help. I have been using <a href="http://hadoop.apache.org/hbase/">HBase</a> for over a year now (started prototyping in late 2007), in production with three clusters spread over more than 50 servers. And I pretty much set them up myself, from plugging in the hardware to designing and running the cluster and the system on top of that. While this is in itself nothing special, I feel that I gained a lot of experience with HBase. I also feel that it could be very helpful to others that are thinking about HBase and how it may help them.</p><p>So I hereby declare myself to be a <span style="font-weight:bold;">European HBase Ambassador</span>. Mind you, this is no official title. The purpose is to help further the adoption of HBase in research and/or commercial projects. So what does this "position" entail? I offer this:<br />
<blockquote>For the cost of travel (and if necessary accommodation) I will present any aspect of HBase in production to whoever is interested.</blockquote>Yes, I will come to you if you ask, no matter where in Europe (or beyond). I will <a href="http://www.larsgeorge.com/2009/03/hbase-vs-couchdb-in-berlin.html">present</a> on the internals of HBase, its API and so on to developers or higher concepts to architects. Or to management on a white paper level. You want to know about HBase and how it can help you? Let me know and I will show you.</p><p>Why me? Besides what I mentioned above I have over 13 years experience in software engineering and am responsible as the CTO at <a href="http://www.worldlingo.com">WorldLingo</a> which for example is the sole provider for all machine translations in Microsoft Office for Windows and MacOS - given the text is longer than a few words because otherwise the internal word dictionaries are taking preference. I write and speak German (native) and English fluently. Last but not least I have regular contact with the core developers of HBase and am a contributor myself - as much as time allows.</p><p>So there you have it. This is my offer. I hope you see value in and take me up on it! I surely am looking forward to meeting you.</p>Unknownnoreply@blogger.com9tag:blogger.com,1999:blog-860423771829255614.post-67707259841062166362009-03-31T15:04:00.001-07:002009-03-31T15:44:08.503-07:0010 Years in one Project<p>For about ten years now I am the CTO at <a href="http://www.worldlingo.com">WorldLingo</a>. During those years I have seen quite a few people join and leaving us eventually. Below is a small snapshot of how time has passed. Obviously I am quite proud to be somewhat the rock in the sea.</p><p><br /><object width="640" height="480"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=3944920&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=00ADEF&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=3944920&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=00ADEF&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="640" height="480"></embed></object></p><p>If you like to know how the video was created, then read on.</p><p>I download the <a href="http://code.google.com/p/codeswarm/source/checkout">source</a> of the code_swarm project following the description, i.e. I used<br /><pre name="code" class="bash"><br />svn checkout http://codeswarm.googlecode.com/svn/trunk/ codeswarm-read-only<br />cd codeswarm-read-only<br />ant all</pre><br />to get the code and then ran <code>ant all</code> in its root directory:<br /><pre name="code" class="bash"><br />C:\CODESW~1>ant all<br />Buildfile: build.xml<br /><br />init:<br /> [echo] Running INIT<br /><br />build:<br /> [echo] Running BUILD<br /> [mkdir] Created dir: C:\CODESW~1\build<br /> [javac] Compiling 18 source files to C:\CODESW~1\build<br /> [copy] Copying 1 file to C:\CODESW~1\build<br /><br />jar:<br /> [echo] Running JAR<br /> [mkdir] Created dir: C:\CODESW~1\dist<br /> [jar] Building jar: C:\CODESW~1\dist\code_swarm.jar<br /><br />all:<br /> [echo] Building ALL<br /><br />BUILD SUCCESSFUL<br />Total time: 6 seconds<br /></pre><br />Note that this is on my Windows machine. After the build you will have to edit the config file to have your settings and regular expressions match your project. 
I really took the supplied sample config file, copied it and modified these lines:<br /><pre name="code" class="bash"><br /># This is a sample configuration file for code_swarm<br /><br />...<br /><br /># Input file<br />InputFile=data/wl-repevents.xml<br /><br />...<br /><br /># Project time per frame<br />#MillisecondsPerFrame=51600000<br /><br />...<br /><br /># Optional Method instead of MillisecondsPerFrame<br />FramesPerDay=2<br /><br />...<br /><br />ColorAssign1="wlsystem",".*wlsystem.*", 0,0,255, 0,0,255<br />ColorAssign2="www",".*www.*", 0,255,0, 0,255,0<br />ColorAssign3="docs",".*docs.*", 102,0,255, 102,0,255<br />ColorAssign4="serverconfig",".*serverconf.*", 255,0,0, 255,0,0<br /><br /># Save each frame to an image?<br />TakeSnapshots=true<br /><br />...<br /><br />DrawNamesHalos=true<br /><br />...<br /></pre><br />This is just adjusting the labels and turning on the snap shots to be able to create a video at the end. I found a <a href="http://code.google.com/p/codeswarm/wiki/GeneratingAVideo">tutorial</a> that explained how to set this up.</p><p>What did not work for me is getting mencoder to work. I downloaded the MPlayer Windows installer from its <a href="http://www.mplayerhq.hu/design7/dload.html">official site</a> and although it is meant to have mencoder included it does not. Or I am blind.</p><p>So, I simply ran <br /><pre name="code" class="bash">mkdir frames<br />runrepositoryfetch.bat data\wl.config</pre> <br />to fetch the history of our repository spanning about 10 years - going from Visual SourceSafe, to CVS and currently running on Subversion. One further problem was that the output file of the above script was not named as I had previously specified in the config file, so I had to rename it like so: <br /><pre name="code" class="bash">cd data<br />ren realtime_sample1157501935.xml wl-repevents.xml</pre><br />After that I was able to use <code>run.bat data\wl.config</code> to see the full movie in real time.</p><p>With the snap shots created but me not willing to further dig into the absence of mencoder I fired up my trusted MacBookPro and used Quicktime to create the movie from an image sequence.</p><p>When Quicktime did its magic I saved the .mov file and used VisualHub to convert it to a proper video format to upload to Vimeo. And that was it really.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-860423771829255614.post-26631512328988052052009-03-16T13:11:00.000-07:002009-03-17T02:27:57.554-07:00CouchDB and CouchApp<p>As mentioned in my previous post, I wanted to see CouchDB in action and decided to "push" one of the available sample applications into it. CouchDB has a built in web server to support the REST based API CouchDB supports. The developers were smart enough to see its broader use and allow for applications to be uploaded into a database. An application is a set of static HTML files, images, and JavaScripts that can form a fully functional web application, including the data stored directly in that very same database. With CouchDB's built in replication you get a fully distributed application. Sweet! </p><p>Sure you will have to install a load balancer in front of multiple instances of CouchDB, but that is a simple engineering task. I am not sure how session handling will work though. Without a somewhat standard session handling framework it may be difficult to build state aware applications. You can save the session in the database of course and reload upon each request. 
Is it replicated fast enough though for random access to cluster nodes?</p><p>Back to the sample application. I chose the <a href="http://github.com/jchris/couchdb-twitter-client/tree/master">Twitter client</a> provided by <a href="http://jchrisa.net/drl/_design/sofa/_list/index/recent-posts?descending=true&limit=5">Chris Anderson</a> from the CouchDB team (btw, his blog is now hosted directly on CouchDB). I got the tar ball from the above GitHub repository and unpacked it. To get it "pushed" into a CouchDB database you need the actual <a href="http://github.com/jchris/couchapp/tree/master">CouchApp</a> as well. Download its tar ball as well and unpack it. Now we can install CouchApp first. I went with the README file provide and tried the Ruby version, installing it as a gem:<br /><pre name="code" class="bash"><br />$ sudo gem update --system<br />$ sudo gem install couchapp<br /></pre><br />While that installed fine, the syntax that you then find for example for the application to be pushed into the database does not match with the Ruby version of CouchApp. After talking to Chris on the IRC channel he advised uninstalling the Ruby version and rather use the Python based version. I uninstalled the gem and ran:<br /><pre name="code" class="bash"><br />$ sudo easy_install couchapp<br /></pre><br />Now the syntax of <code>couchapp</code> did match and I was ready to upload the Twitter sample application - or was I?</p><p>Not so fast though, every attempt to upload the application resulted in partial failures. I compared what I had in the uploaded database with what Chris had. His had the attachments needed, mine did not. The database has been created but half of the files were missing. After a quick chat with Chris again we realized that he is using the trunk version of CouchDB, I was using the latest release. Mine was simply outdated and was missing the new application extensions. Here are the next steps I had to run:<br /><pre name="code" class="bash"><br />$ cd /downloads<br />$ svn co http://svn.apache.org/repos/asf/couchdb/trunk couchdb<br />$ cd couchdb/<br />$ ./bootstrap<br />$ ./configure<br />$ make && sudo make install<br />$ sudo -i -u couchdb couchdb<br /></pre>and to push the application<br /><pre name="code" class="bash"><br />$ cd ../jchris-couchdb-twitter-client-6bee14ae1b3525d56d77dd9c114002582dc0abe8<br />$ couchapp push http://localhost:5984/test-twitter<br /></pre><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_Cib_A77V54U/Sb9mWtSmdXI/AAAAAAAAACA/oE3LRxwNNrQ/s1600-h/cdb-design.png"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 200px; height: 174px;" src="http://2.bp.blogspot.com/_Cib_A77V54U/Sb9mWtSmdXI/AAAAAAAAACA/oE3LRxwNNrQ/s200/cdb-design.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5314078625621243250" /></a><br />As expected, the application push did succeed now. I was able to see all the files in the newly created database. The screen shot shows the design view of the new "test-twitter" database. Everything you need to serve the application is there, even the favicon.ico to display in the browsers address bar. 
If you click on the "index.html" the newly uploaded application is started and after logging into Twitter I had everything running as I wanted it.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_Cib_A77V54U/Sb9o-9wK5gI/AAAAAAAAACQ/aQCw8_mfnJA/s1600-h/cdb-twitter.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 200px; height: 156px;" src="http://3.bp.blogspot.com/_Cib_A77V54U/Sb9o-9wK5gI/AAAAAAAAACQ/aQCw8_mfnJA/s200/cdb-twitter.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5314081516258256386" /></a><br />Here is the a screen shot of the application running. This is really great. I just have to figure out now for myself how to use it either for work or privately. So many choices - but only 24 hours in a day. <br /></p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-860423771829255614.post-68185311425871639432009-03-15T07:28:00.000-07:002009-03-15T12:13:33.510-07:00Erlang and CouchDB on MacOS<p>As I mentioned before I am looking into how I could use Erlang in our own efforts. Sure, it is not the silver bullet for all problems ("Can it make coffee?") and all the hype on the developer ether. But it is certainly built to be the basis of large concurrent systems, for example Facebook's <a href="http://www.facebook.com/notes.php?id=9445547199">chat system</a>. Or the below mentioned Amazon Dynamo clone called <a href="http://github.com/cliffmoon/dynomite/tree/master">Dynomite</a>.</p><p>For starters I purchased "The Pragmatic Programmers" screen case <a href="http://www.pragprog.com/screencasts/v-kserl/erlang-in-practice">Erlang in Practise</a> with Kevin Smith. I have to say, it is worth every cent! I did watch it on a flight from Munich to Las Vegas and it made the hours literally "fly by". There is something really cool about seeing a program being developed in front of your eyes and re-written many times to explain more advanced concepts as you go along. Highly recommended. </p><p>Now I have a background in Prolog and Lisp so getting into the Erlang way of doing things was not too difficult. Of course, the difficult yet again is how to use it for something own. I decided to first build Erlang on my MacBook Pro and then try an Erlang based system to see it working. Building Erlang on MacOS is described in <a href="http://tim.dysinger.net/2007/12/20/compiling-erlang-on-mac-os-x-leopard-from-scratch/">here</a> but overall it is a simple "wget && tar -zxvf && ./configure && make" obstacle course with a few hickups thrown in for good measure. As the post describes, you first need to install XCode from either the supplied MacOS disks or by downloading it from the net - easy peasy. Next is to build libgd. The above post has a link to the <a href="http://www.libgd.org/DOC_Compiling_GD_on_Mac_OS_X_HOWTO">details</a> required to build libgd on MacOS. It first required downloading all the required library tar balls and then running the following commands: <br /><pre name="code" class="bash"><br />$ cd /downloads/<br />$ tar -zxvf zlib-1.2.3.tar.gz <br />$ tar -zxvf gd-2.0.35.tar.gz <br />$ tar -zxvf freetype-2.3.8.tar.gz <br />$ tar -zxvf jpegsrc.v6b.tar.gz <br />$ tar -zxvf libpng-1.2.34.tar.gz <br /></pre><br />This assumes all tar balls are saved in the "/downloads" directory. 
Next is <code>zlib</code>:<br /><pre name="code" class="bash"><br />$ cd zlib-1.2.3 ; ./configure --shared && make && sudo make install<br />$ ./example<br /></pre><br />Then <code>libpng</code>:<br /><pre name="code" class="bash"><br />$ cd ../libpng-1.2.34<br />$ cp scripts/makefile.darwin Makefile<br />$ vim Makefile<br />$ make && sudo make install<br />$ export srcdir=.; ./test-pngtest.sh<br /></pre><br />Next is the <code>jpeg</code> library. Here I did not have to symlink the <code>libtool</code> as described in the post. I assume that is because I am on MacOS 10.5. So all I did was this:<br /><pre name="code" class="bash"><br />$ cd ../jpeg-6b/<br />$ cp /usr/share/libtool/config.sub .<br />$ cp /usr/share/libtool/config.guess .<br />$ ./configure --enable-shared<br />$ make<br />$ sudo make install<br />$ sudo ranlib /usr/local/lib/libjpeg.a<br /></pre><br />We are getting closer. The <code>freetype</code> library needs these steps, where the last line is for the subsequent <code>libgd</code> build:<br /><pre name="code" class="bash"><br />$ cd ../freetype-2.3.8<br />$ ./configure && make && sudo make install<br />$ sudo ln -s /usr/X11R6/include/fontconfig /usr/local/include<br /></pre><br />OK, now the <code>libgd</code> library:<br /><pre name="code" class="bash"><br />$ cd ../gd-2.0.35<br />$ ln -s `which glibtool` ./libtool<br />$ ./configure <br />$ make && sudo make install <br />$ ./gdtest test/gdtest.png<br /></pre><br />With this all done we can build Erlang from sources like so:<br /><pre name="code" class="bash"><br />$ cd /downloads<br />$ tar -zxvf otp_src_R12B-5.tar.gz <br />$ cd otp_src_R12B-5<br />$ ./configure --enable-hipe --enable-smp-support --enable-threads<br />$ make && sudo make install<br />$ erl<br /></pre><br />All cool, the Erlang shell starts up and is ready for action:<br /><pre name="code" class="bash"><br />$ erl<br />Erlang (BEAM) emulator version 5.6.5 [source] [smp:2] [async-threads:0] [hipe] [kernel-poll:false]<br /><br />Eshell V5.6.5 (abort with ^G)<br />1> "hello world".<br />"hello world"<br />2> <br /></pre></p><p>With this in place I decided to try CouchDB. Again, here are the steps to get it running:<br /><pre name="code" class="bash"><br />$ cd /downloads<br />$ tar -zxvf apache-couchdb-0.8.1-incubating.tar.gz <br />$ cd apache-couchdb-0.8.1-incubating<br />$ less README <br />$ sudo port install automake autoconf libtool help2man<br />$ sudo port install icu spidermonkey<br />$ ./configure <br />$ make<br />$ sudo make install<br />$ sudo -i -u couchdb couchdb<br /></pre><br />I did not have to execute this line: <code>$ open /Applications/Installers/Xcode\ Tools/XcodeTools.mpkg</code>. I assume that is because I had done the full XCode install beforehand. You can also see that you need to install the <a href="http://www.macports.org/">MacPorts</a> tools. With those you have to add a few binary packages to be able to build CouchDB, especially the Unicode library <code>ICU</code> and the <code>spidermonkey</code> C-based JavaScript library provided by the Mozilla organization. The last line starts up the database, and directing your browser to <code>http://localhost:5984/_utils/</code> allows you to see its internal UI. Now relax! ;)</p>
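<p>Besides the browser UI you can also poke the server from the shell to confirm everything is wired up. A minimal sketch using the standard HTTP API - the "playground" database name is just an example:<br /><pre name="code" class="bash"><br /># should return a JSON welcome message including the version<br />$ curl http://localhost:5984/<br /># create and then remove a throwaway database<br />$ curl -X PUT http://localhost:5984/playground<br />$ curl -X DELETE http://localhost:5984/playground<br /></pre></p>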
<p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_Cib_A77V54U/Sb0g9ohb13I/AAAAAAAAAB4/7v3ZwsWsi54/s1600-h/couchdb.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 400px; height: 306px;" src="http://1.bp.blogspot.com/_Cib_A77V54U/Sb0g9ohb13I/AAAAAAAAAB4/7v3ZwsWsi54/s400/couchdb.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5313439378588817266" /></a>If you look closely at the screen shot you will see an inconsistency with the notes above. I will describe this in more detail in a future post about getting the sample Twitter client for CouchDB running using CouchApp.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-860423771829255614.post-89082687688590933402009-03-07T02:16:00.000-08:002009-03-07T04:30:37.691-08:00HBase vs. CouchDB in BerlinI had the pleasure of presenting our involvement with HBase at the <a href="http://newthinking-store.de/event/2009/03/5/day">4th Berlin</a> <a href="http://upcoming.yahoo.com/event/1764187">Hadoop Get Together</a>. It was organized by <a href="http://www.isabel-drost.de/">Isabel Drost</a>. Thanks again to Isabel for having me there, I thoroughly enjoyed it. First off, here are the slides:<p><br /><object id="_ds_4769422" name="_ds_4769422" width="450" height="350" type="application/x-shockwave-flash" data="http://viewer.docstoc.com/"><param name="FlashVars" value="doc_id=4769422&mem_id=602922&doc_type=ppt&fullscreen=0" /><param name="movie" value="http://viewer.docstoc.com/"/><param name="allowScriptAccess" value="always" /><param name="allowFullScreen" value="true" /></object><br /><font size="1"><a href="http://www.docstoc.com/docs/4769422/HBase--WorldLingo">HBase @ WorldLingo</a> - Get more <a href="http://www.docstoc.com/documents/technology/">Information Technology</a></font></p><p>The second talk given was by <a href="http://jan.prima.de/">Jan Lehnardt</a>, a <a href="http://couchdb.apache.org/">CouchDB</a> team member. I have been looking into Erlang for the last few months to see how we could use it for our own efforts. CouchDB is one of the projects you come across when reading articles about Erlang. So it was really great to have Jan present too. </p><p>At the end of both our talks it was great to see how the questions from the audience at times tried to compare the two. So is HBase better or worse than CouchDB? Of course, you cannot compare them directly. While they share common features (well, they store data, right?) they are made to solve different problems. CouchDB offers schema-free storage with built-in replication, which can even be used to create offline clients that sync their changes with another site when they have connectivity again. One of the features intriguing me the most is the ability to use it to serve your own applications to the world. You create the pages and scripts you need and push them into the database using <a href="http://groups.google.com/group/couchapp?pli=1">CouchApp</a>. Since the database already has a built-in web server it can handle your application's requirements implicitly. Nice!</p><p>I asked Jan if he had concerns about scaling this, or if it wouldn't be better to use Apache or Nginx to serve the static content. His argument was that Erlang can handle many more concurrent requests than Apache can, for example. I read up on <a href="http://en.wikipedia.org/wiki/Yaws_(web_server)">Yaws</a> and saw his point. So I guess it is then a question of memory and CPU requirements. 
The former is apparently another strength of CouchDB, which has been shown to serve thousands of concurrent requests while needing only about 10MB of RAM - how awesome is that?!?! I am not sure about CPU - but I would guess it is equally sane.</p><p>Another Erlang project I am interested in is <a href="http://github.com/cliffmoon/dynomite/tree/master">Dynomite</a>, an Erlang based Amazon Dynamo "clone" (or rather implementation). Talking to Cliff it seems to be just as awesome, leveraging the Erlang OTP abilities to create something that a normal Java developer and their JRE are just not used to.</p><p>And that brings me to HBase. I told the crowd in Berlin that as of version 0.18.0 HBase is ready for anyone to get started with - given they read the Wiki to set the file handles right and a few other bits and pieces. </p><p><span style="font-weight:bold;">Note:</span> I was actually thinking about suggesting an improvement to the HBase team: a command line check that can be invoked separately, or that runs when "start-hbase.sh" is called, and that checks a few of these common parameters and prints out warnings to the user. I know that the file handle count is printed out in the log files, but for a newbie this is a bit too deep down. What could be checked? First off the file handle limit, which should be say 32K. The next thing is the newer resource limits that were introduced with Hadoop and now need tweaking. An example is the "xciever" (sic) value. This again is documented in the Wiki, but who reads it, right? Another common issue is RAM. If the master knows the number of regions (or while it is scanning the META to determine it) it could warn if the JRE is not given enough memory. Sure, there are no hard boundaries, but better to see a <code>Warning: Found x regions. Your configured memory for the JRE seems too low for the system to run stable.</code> A rough sketch of what such a check could look like follows at the end of this post.</p><p>Back to HBase. I also told the audience that as of HBase 0.19.0 scanning speed was much improved and that I am happy where we are nowadays in terms of stability and speed. Sure, it could be faster for random reads so I may be able to drop my MemCached layer. And the team is working on that. So, here's hoping that we will see the best HBase ever in the upcoming version. I for one am 100% sure that the HBase guys can deliver - they have done so in the past and will now as well. All I can say - give it a shot!</p><p>So, CouchDB is lean and mean while HBase is a resource hog from my experience. But it is also built to scale to petabyte-sized data. With CouchDB, you would have to add sharding on top of it, including all the issues that come with it, for example rebalancing, fail-over, recovery, adding more servers and so on. For me HBase is the system of choice - for this particular problem. That does not mean I could not use CouchDB, or even Erlang for that matter, in a separate area. Until then I will keep a very close eye on this exciting (though in the case of Erlang not new!) technology. May the open-source projects rule and live long and prosper!</p>
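<p>As for the note above, such a check does not have to be fancy. Here is a rough sketch of what it could look like - the 32K threshold and the location of the <code>hdfs-site.xml</code> file are assumptions, adjust them to your installation:<br /><pre name="code" class="bash"><br />#!/bin/sh<br /># rough pre-flight check for an HBase node - thresholds and paths are examples only<br />LIMIT=`ulimit -n`<br />echo "Open file limit: $LIMIT"<br />if [ "$LIMIT" != "unlimited" ] && [ "$LIMIT" -lt 32768 ]; then<br />  echo "Warning: open file limit seems low, consider raising it to 32768 or more."<br />fi<br /># assumes the Hadoop configuration lives in /etc/hadoop/conf<br />if ! grep -q "dfs.datanode.max.xcievers" /etc/hadoop/conf/hdfs-site.xml; then<br />  echo "Warning: dfs.datanode.max.xcievers is not set, the low default is known to cause trouble for HBase."<br />fi<br /></pre><br />Called from "start-hbase.sh" (or by hand) this would at least surface the most common misconfigurations before the daemons come up.</p>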
Unknownnoreply@blogger.com5tag:blogger.com,1999:blog-860423771829255614.post-37288207546828911862009-02-21T18:13:00.000-08:002009-02-22T10:42:10.799-08:00Mini Local HBase ClusterI am trying to get a local setup going where I have everything I need on my PC - heck, even on my MacBook Pro. I use Eclipse and Java to develop, so that is easy. I also use <a href="http://www.danga.com/memcached/">Memcached</a> and there is a nice <a href="http://trac.macports.org/browser/trunk/dports/sysutils/memcached/Portfile">MacPorts</a> version of it available. But what I also need is a working Hadoop/HBase cluster! <br /><br />At work we have a few of these, large and small, but they are either in production or simply too complex to use for day-to-day testing, especially when you try to debug a MapReduce job or code talking directly to HBase. I found that the excellent HBase team already had a class in place that is used to set up the JUnit tests they run. And the same goes for Hadoop. So I set out to extract the bare essentials, if you will, to create a tiny HBase cluster running on a tiny Hadoop distributed filesystem. <br /><br />After a couple of issues had been resolved, the below class is my "culmination" of sweet cluster goodness ;)<br /><pre name="code" class="java"><br />/* File: MiniLocalHBase.java<br /> * Created: Feb 21, 2009<br /> * Author: Lars George<br /> *<br /> * Copyright (c) 2009 larsgeorge.com<br /> */<br /><br />package com.larsgeorge.hadoop.hbase;<br /><br />import java.io.IOException;<br />import org.apache.hadoop.fs.FileSystem;<br />import org.apache.hadoop.fs.Path;<br />import org.apache.hadoop.hbase.HBaseConfiguration;<br />import org.apache.hadoop.hbase.HConstants;<br />import org.apache.hadoop.hbase.MiniHBaseCluster;<br />import org.apache.hadoop.hbase.util.FSUtils;<br />import org.apache.hadoop.hdfs.MiniDFSCluster;<br /><br />/**<br /> * Starts a small local DFS and HBase cluster.<br /> *<br /> * @author Lars George<br /> */<br />public class MiniLocalHBase {<br /><br /> static HBaseConfiguration conf = null;<br /> static MiniDFSCluster dfs = null;<br /> static MiniHBaseCluster hbase = null;<br /> <br /> /**<br /> * Main entry point to this class. <br /> *<br /> * @param args The command line arguments.<br /> */<br /> public static void main(String[] args) {<br /> try {<br /> int n = args.length > 0 && args[0] != null ? 
<br /> Integer.parseInt(args[0]) : 4;<br /> conf = new HBaseConfiguration();<br /> dfs = new MiniDFSCluster(conf, 2, true, (String[]) null);<br /> // set file system to the mini dfs just started up<br /> FileSystem fs = dfs.getFileSystem();<br /> conf.set("fs.default.name", fs.getUri().toString()); <br /> Path parentdir = fs.getHomeDirectory();<br /> conf.set(HConstants.HBASE_DIR, parentdir.toString());<br /> fs.mkdirs(parentdir);<br /> FSUtils.setVersion(fs, parentdir);<br /> conf.set(HConstants.REGIONSERVER_ADDRESS, HConstants.DEFAULT_HOST + ":0");<br /> // disable UI or it clashes for more than one RegionServer<br /> conf.set("hbase.regionserver.info.port", "-1");<br /> hbase = new MiniHBaseCluster(conf, n);<br /> // add close hook<br /> Runtime.getRuntime().addShutdownHook(new Thread() {<br /> public void run() {<br /> hbase.shutdown();<br /> if (dfs != null) {<br /> try {<br /> FileSystem fs = dfs.getFileSystem();<br /> if (fs != null) fs.close();<br /> } catch (IOException e) {<br /> System.err.println("error closing file system: " + e);<br /> }<br /> try {<br /> dfs.shutdown();<br /> } catch (Exception e) { /*ignore*/ }<br /> }<br /> }<br /> } );<br /> } catch (Exception e) {<br /> e.printStackTrace();<br /> }<br /> } // main<br /><br />} // MiniLocalHBase<br /></pre><br />The critical part for me was that if you want to be able to start more than one region server you have to disable the UI of each of these region servers, or they will fail trying to bind the same info port, usually 60030. <br /><br />I also added a small shutdown hook so that when you quit the process it shuts down nicely and keeps the data in such a condition that you can restart the local cluster again later on for further testing. Otherwise you may end up having to redo the file system - no biggie I guess, but hey, why not? You can specify the number of RegionServers being started on the command line. It defaults to 4 in my sample code above. Also, you do not need any <code>hbase-site.xml</code> or <code>hadoop-site.xml</code> to set anything else. All required settings are hardcoded to start the different servers in separate threads. You can of course add one and tweak further settings - just keep in mind that the ones hardcoded in the code cannot be reassigned by the external XML settings files. You would have to move those directly into the code.<br /><br />To start this mini cluster you can either run it from within Eclipse for example, which makes it really easy since all the required libraries are in place, or start it from the command line. This could work like so:<br /><pre name="code" class="bash"><br />hadoop$ java -Xms512m -Xmx512m -cp bin:lib/hadoop-0.19.0-core.jar:lib/hadoop-0.19.0-test.jar:lib/hbase-0.19.0.jar:lib/hbase-0.19.0-test.jar:lib/commons-logging-1.0.4.jar:lib/jetty-5.1.4.jar:lib/servlet-api.jar:lib/jetty-ext/jasper-runtime.jar:lib/jetty-ext/jsp-api.jar:lib/jetty-ext/jasper-compiler.jar:lib/jetty-ext/commons-el.jar com.larsgeorge.hadoop.hbase.MiniLocalHBase<br /></pre><br />What I did was create a small project, have the class compile into the "bin" directory, and throw all Hadoop and HBase libraries into the "lib" directory. This was only for the sake of keeping the command line short. I suggest you have the classpath set already or have it point to the original locations where you have untar'ed the respective packages.<br /><br />Running it from within Eclipse lets you of course use the integrated debugging tools at hand. 
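If you prefer the shell but still want the debugger, you can attach Eclipse remotely by adding the standard JDWP agent flags to the java command above - a small sketch, where the port number is arbitrary and <code>$CP</code> is assumed to hold the classpath from the previous example:<br /><pre name="code" class="bash"><br /># same invocation as above, plus the remote debugging agent<br />hadoop$ java -Xms512m -Xmx512m -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000 -cp $CP com.larsgeorge.hadoop.hbase.MiniLocalHBase<br /></pre><br />Then create a "Remote Java Application" debug configuration in Eclipse that attaches to localhost on port 8000 and you can set breakpoints as usual. 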
The next step is to follow through with what the test classes already have implemented and be able to start Map/Reduce jobs with debugging enabled. Mind you though, the local cluster is not very powerful - even if you give it more memory than I did above. But fill it with a few hundred rows, use it to debug your code, and once it runs fine, run it happily ever after on your production site.<br /><br />All the credit goes to the Hadoop and HBase teams of course; I simply gathered their code from various places.Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-860423771829255614.post-85177361070781658272009-02-05T10:21:00.001-08:002009-02-05T10:50:05.756-08:00Apache fails on SemaphoresIn the last few years I twice had an issue with our Apache web servers where all of a sudden they would crash and not start again. While there are obvious <a href="http://www.cyberciti.biz/faq/troubleshooting-apache-webserver-will-not-restart-start/">reasons</a> when the configuration is screwed up, there are also cases where you simply do not know why it will not restart. There is enough drive space and RAM, and no other process is locking the port (I even checked with lsof). <br /><br />All you get is an error message in the log saying:<br /><br /><code>[Fri May 21 15:34:22 2008] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock<br />Configuration Failed </code><br /><br />After some digging, it turned out that all semaphores were used up and had to be deleted first. Here is a script I use to do that:<br /><pre name="code" class="bash">echo "Semaphores found: "<br />ipcs -s | awk '{ print $2 }' | wc -l<br />ipcs -s | awk '{ print $2 }' | xargs -n 1 ipcrm sem<br />echo "Semaphores found after removal: "<br />ipcs -s | awk '{ print $2 }' | wc -l</pre><br />
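On a box that runs more than just Apache you may not want to clear every semaphore on the system. A slightly safer variant is to limit the removal to the web server's user - a small sketch, assuming the processes run as "www-data" (adjust to "apache" or whatever your distribution uses):<br /><pre name="code" class="bash">ipcs -s | grep www-data | awk '{ print $2 }' | xargs -n 1 ipcrm sem</pre><br />Sometimes you really wonder what else could go wrong.Unknownnoreply@blogger.com0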