I have a Maven project, compiled with Java 7, that uses Spring IoC and is supposed to test the use of the hbase-client jar.
The dependencies are as follows:
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.0.0-cdh5.5.4</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-server</artifactId>
<version>1.0.0-cdh5.5.4</version>
</dependency>
I have a JUnit test that creates a local cluster, creates a table, and loads some data; the client then connects to the cluster, executes a lookup, and shuts the cluster down. The test then tries to start the cluster up again, to check the client's reconnection mechanism.
The problem is that during the startup of the cluster after it has been shut down, I get an exception which, after a lot of research, suggests that the shutdown process did not complete successfully.
Any help figuring out how to shut down the cluster properly would be great.
The code context:
@RunWith(SpringJUnit4ClassRunner.class)
public class TestHBaseUserOfflineReconnection
{
@Value("${userTableName}")
private static String userTableName = "TestTable";
@Autowired
@Qualifier("hbaseUserOfflineReconnectDao")
private UserDao userOfflineReconnectDao;
@Autowired
private DemographicBenchmark benchmark;
private Table htable;
private static LocalHBaseCluster hbaseCluster;
private static MiniZooKeeperCluster zooKeeperCluster;
private static Configuration configuration;
static Connection conn = null;
@BeforeClass
public static void setup() throws IOException, InterruptedException
{
// delete the default local folder where HBase stores its files
String userName = System.getProperty("user.name");
FileUtils.deleteDirectory(new File("/tmp/hbase-" + userName));
initHbase();
}
public static void initHbase() throws IOException, InterruptedException
{
configuration = HBaseConfiguration.create();
zooKeeperCluster = new MiniZooKeeperCluster(configuration);
zooKeeperCluster.setDefaultClientPort(2181);
zooKeeperCluster.startup(new File("target/zookeepr-" + System.currentTimeMillis()));
hbaseCluster = new LocalHBaseCluster(configuration, 1);
hbaseCluster.startup();
}
@Before
public void initeHTable() throws IOException
{
configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conn = ConnectionFactory.createConnection(configuration);
HTableDescriptor table = new HTableDescriptor(TableName.valueOf(userTableName));
table.addFamily(new HColumnDescriptor("cf"));
conn.getAdmin().createTable(table);
htable = conn.getTable(TableName.valueOf(userTableName));
}
public static void shutdown() throws IOException
{
hbaseCluster.shutdown();
hbaseCluster.waitOnMaster(0);
zooKeeperCluster.shutdown();
}
@Test
public void testHBaseReconnection() throws IOException, TkException, InterruptedException
{
// do some lookups with the client, and all goes well..
shutdown();
initHbase(); // HERE I GET THE EXCEPTION
// some more code...
shutdown(); // after test finished, closing the cluster
}
}
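For reference, one direction worth trying (a hedged sketch, not a confirmed fix) is to close the client-side resources before stopping the cluster, and to wait for the cluster threads to exit rather than only the master. The sketch below reuses the field names from the class above, makes `shutdown()` an instance method so it can reach `htable`, and assumes `LocalHBaseCluster.join()` is available in this HBase version:

```java
// Hedged sketch: release client resources first, then stop the cluster
// and wait for its threads to finish before stopping ZooKeeper.
public void shutdown() throws IOException, InterruptedException
{
    if (htable != null) {
        htable.close();           // release the table handle first
    }
    if (conn != null && !conn.isClosed()) {
        conn.close();             // then the connection and its ZK watcher
    }
    hbaseCluster.shutdown();      // ask master and region servers to stop
    hbaseCluster.join();          // block until all cluster threads exit
    zooKeeperCluster.shutdown();
}
```

The ordering matters: a still-open `Connection` keeps a ZooKeeper session alive, which can delay or confuse the cluster shutdown that follows.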
The exception I get:
ERROR 2016-07-13 16:48:03,849 [B.defaultRpcServer.handler=4,queue=1,port=46727]
org.apache.hadoop.hbase.master.MasterRpcServices: Region server localhost,44545,1468417682471
reported a fatal error: ABORTING region server localhost,44545,1468417682471:
Unhandled: Region server startup failed
Cause:
java.io.IOException: Region server startup failed
    at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:2827)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1317)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:852)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source RegionServer,sub=Server already exists!
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:135)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:112)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:228)
    at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:75)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.<init>(MetricsRegionServerSourceImpl.java:66)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.<init>(MetricsRegionServerSourceImpl.java:58)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createServer(MetricsRegionServerSourceFactoryImpl.java:46)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServer.<init>(MetricsRegionServer.java:38)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1301)
    ... 2 more
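The `MetricsException: Metrics source RegionServer,sub=Server already exists!` indicates that the Hadoop metrics registry, which is static and JVM-wide, still holds the sources registered by the first cluster, so the restarted region server aborts when it tries to register them again. HBase's own `HBaseTestingUtility` works around this by putting the metrics system into mini-cluster mode, which tolerates duplicate source names. A hedged sketch of `initHbase()` with that call added (assuming hadoop-common's `DefaultMetricsSystem.setMiniClusterMode` is on the classpath, as it ships with this CDH version) would be:

```java
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public static void initHbase() throws IOException, InterruptedException
{
    // Hedged sketch: mini-cluster mode makes the JVM-wide Hadoop metrics
    // registry tolerate re-registration of source names such as
    // "RegionServer,sub=Server", which otherwise aborts the region server
    // on the second startup. HBaseTestingUtility does this for its own
    // mini clusters.
    DefaultMetricsSystem.setMiniClusterMode(true);

    configuration = HBaseConfiguration.create();
    zooKeeperCluster = new MiniZooKeeperCluster(configuration);
    zooKeeperCluster.setDefaultClientPort(2181);
    zooKeeperCluster.startup(new File("target/zookeepr-" + System.currentTimeMillis()));
    hbaseCluster = new LocalHBaseCluster(configuration, 1);
    hbaseCluster.startup();
}
```

An alternative worth considering is to drive the whole lifecycle through `HBaseTestingUtility` (`startMiniCluster()` / `shutdownMiniCluster()`), which handles this bookkeeping internally.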
Copyright License:
Author: Avihoo Mamka. Reproduced under the CC BY-SA 4.0 license with a link to the original source and disclaimer.
Link: https://stackoverflow.com/questions/38353610/hbase-local-cluster-failed-to-startup-after-shutdown-in-java-client