leongfans
Hadoop Source Code Reading: Using the Embedded Jetty HTTP Server


Hadoop embeds the Jetty HTTP server, which serves two main purposes:

1. Providing a web interface that exposes Hadoop's internal state

2. Participating in the operation and management of the Hadoop cluster

 

Take the Namenode as an example. The Namenode starts its HttpServer (a Jetty wrapper) by calling

startHttpServer(conf);

The relevant code is:

          httpServer = new HttpServer("hdfs", infoHost, infoPort, 
              infoPort == 0, conf, 
              SecurityUtil.getAdminAcls(conf, DFSConfigKeys.DFS_ADMIN));

  Stepping into HttpServer, we can see that this code serves the hdfs directory under HADOOP_HOME/webapps as Jetty's default context (the datanode uses the datanode directory, the jobtracker uses job, the tasktracker uses task, and the secondarynamenode uses secondary):

    webAppContext = new WebAppContext();
    webAppContext.setDisplayName("WepAppsContext");
    webAppContext.setContextPath("/");
    webAppContext.setWar(appDir + "/" + name);
    webAppContext.getServletContext().setAttribute(CONF_CONTEXT_ATTRIBUTE, conf);
    webAppContext.getServletContext().setAttribute(ADMINS_ACL, adminsAcl);
    webServer.addHandler(webAppContext);
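The daemon-to-webapp directory convention described above can be sketched as a simple lookup table. The helper below is illustrative only, not Hadoop code; the directory names are the ones listed in the text:

```java
import java.util.HashMap;
import java.util.Map;

public class WebAppDirs {
    // Each Hadoop daemon serves the matching directory under
    // HADOOP_HOME/webapps as its default Jetty context.
    static final Map<String, String> WEBAPP_DIR = new HashMap<String, String>();
    static {
        WEBAPP_DIR.put("namenode", "hdfs");
        WEBAPP_DIR.put("datanode", "datanode");
        WEBAPP_DIR.put("jobtracker", "job");
        WEBAPP_DIR.put("tasktracker", "task");
        WEBAPP_DIR.put("secondarynamenode", "secondary");
    }

    // Mirrors setWar(appDir + "/" + name) from the snippet above,
    // e.g. warPath("/opt/hadoop/webapps", "namenode") -> "/opt/hadoop/webapps/hdfs"
    static String warPath(String appDir, String daemon) {
        return appDir + "/" + WEBAPP_DIR.get(daemon);
    }

    public static void main(String[] args) {
        System.out.println(warPath("/opt/hadoop/webapps", "namenode"));
    }
}
```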

 HttpServer also adds contexts for the logs and for the static resources (CSS, JS, images) under webapps:

  protected void addDefaultApps(ContextHandlerCollection parent,
      final String appDir) throws IOException {
    // set up the context for "/logs/" if "hadoop.log.dir" property is defined. 
    String logDir = System.getProperty("hadoop.log.dir");
    if (logDir != null) {
      Context logContext = new Context(parent, "/logs");
      logContext.setResourceBase(logDir);
      logContext.addServlet(AdminAuthorizedServlet.class, "/");
      logContext.setDisplayName("logs");
      setContextAttributes(logContext);
      defaultContexts.put(logContext, true);
    }
    // set up the context for "/static/*"
    Context staticContext = new Context(parent, "/static");
    staticContext.setResourceBase(appDir + "/static");
    staticContext.addServlet(DefaultServlet.class, "/*");
    staticContext.setDisplayName("static");
    setContextAttributes(staticContext);
    defaultContexts.put(staticContext, true);
  }

…as well as servlets that expose various pieces of status information:

  /**
   * Add default servlets.
   */
  protected void addDefaultServlets() {
    // set up default servlets
    addServlet("stacks", "/stacks", StackServlet.class);
    addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
    addServlet("metrics", "/metrics", MetricsServlet.class);
    addServlet("conf", "/conf", ConfServlet.class);
    addServlet("jmx", "/jmx", JMXJsonServlet.class);
  }
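Because these are ordinary servlets, the status endpoints can be queried with any HTTP client once the daemon is up. A small sketch that builds the URLs (the host/port here is an assumption; 50070 is the usual default for the namenode's dfs.http.address):

```java
public class ServletUrls {
    // The five default status endpoints registered by addDefaultServlets().
    static final String[] PATHS = {"/stacks", "/logLevel", "/metrics", "/conf", "/jmx"};

    static String servletUrl(String hostPort, String path) {
        return "http://" + hostPort + path;
    }

    public static void main(String[] args) {
        for (String p : PATHS) {
            // Against a live namenode, each URL could be fetched with
            // new java.net.URL(url).openStream(), or simply curl.
            System.out.println(servletUrl("localhost:50070", p));
        }
    }
}
```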
 

Finally, back in the Namenode, a number of Namenode-specific endpoints are added, for example:

/fsck, used to check the file system

/getimage, the entry point through which the SecondaryNamenode fetches the fsimage

 

          httpServer.addInternalServlet("getDelegationToken", 
                                        GetDelegationTokenServlet.PATH_SPEC, 
                                        GetDelegationTokenServlet.class, true);
          httpServer.addInternalServlet("renewDelegationToken", 
                                        RenewDelegationTokenServlet.PATH_SPEC, 
                                        RenewDelegationTokenServlet.class, true);
          httpServer.addInternalServlet("cancelDelegationToken", 
                                        CancelDelegationTokenServlet.PATH_SPEC, 
                                        CancelDelegationTokenServlet.class,
                                        true);
          httpServer.addInternalServlet("fsck", "/fsck", FsckServlet.class, true);
          httpServer.addInternalServlet("getimage", "/getimage", 
              GetImageServlet.class, true);
          httpServer.addInternalServlet("listPaths", "/listPaths/*", 
              ListPathsServlet.class, false);
          httpServer.addInternalServlet("data", "/data/*", 
              FileDataServlet.class, false);
          httpServer.addInternalServlet("checksum", "/fileChecksum/*",
              FileChecksumServlets.RedirectServlet.class, false);
          httpServer.addInternalServlet("contentSummary", "/contentSummary/*",
              ContentSummaryServlet.class, false);
          httpServer.start();
      
          // The web-server port can be ephemeral... ensure we have the correct info
          infoPort = httpServer.getPort();
          httpAddress = new InetSocketAddress(infoHost, infoPort);
          conf.set("dfs.http.address", infoHost + ":" + infoPort);
          LOG.info("Web-server up at: " + infoHost + ":" + infoPort);
          return httpServer;
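The `infoPort == 0` case and the `getPort()` read-back at the end follow the standard ephemeral-port pattern: binding to port 0 asks the OS for any free port, so the real port has to be queried after the bind. A minimal stdlib sketch of the same idea:

```java
import java.net.ServerSocket;

public class EphemeralPort {
    // Bind to port 0, let the OS pick a free ephemeral port,
    // then read the real port back -- just as the namenode rewrites
    // dfs.http.address with httpServer.getPort() after starting Jetty.
    static int bindEphemeral() throws Exception {
        ServerSocket server = new ServerSocket(0);
        int actualPort = server.getLocalPort();  // the port the OS assigned
        server.close();
        return actualPort;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("bound to port " + bindEphemeral());
    }
}
```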

 

Opening the hdfs webapp directory, you will find that the index page redirects straight to dfshealth.jsp. web.xml shows the mapping:

    <servlet-mapping>
        <servlet-name>org.apache.hadoop.hdfs.server.namenode.dfshealth_jsp</servlet-name>
        <url-pattern>/dfshealth.jsp</url-pattern>
    </servlet-mapping>

 dfshealth_jsp.class can be found inside hadoop-core-xxx.jar.
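Since the exact jar name varies by version, you can confirm which jar actually contains the compiled class by scanning a jar's entries. The helper below is a generic illustration, not part of Hadoop:

```java
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class FindClassInJar {
    // Returns true if the jar at jarPath contains the given entry, e.g.
    // "org/apache/hadoop/hdfs/server/namenode/dfshealth_jsp.class".
    static boolean containsEntry(String jarPath, String entryName) throws Exception {
        JarFile jar = new JarFile(jarPath);
        try {
            for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
                if (e.nextElement().getName().equals(entryName)) {
                    return true;
                }
            }
            return false;
        } finally {
            jar.close();
        }
    }

    public static void main(String[] args) throws Exception {
        // Usage: java FindClassInJar /path/to/hadoop-core-xxx.jar
        System.out.println(containsEntry(args[0],
            "org/apache/hadoop/hdfs/server/namenode/dfshealth_jsp.class"));
    }
}
```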

 

 

Comments
#2 leongfans 2012-04-13
cjnetwork wrote:
> I searched hadoop-core-1.0.1.jar but didn't find that class... Could you give me a pointer?

I'm not sure about version 1.0.1; what I describe here is the Cloudera CDH3u2 release.
dfshealth_jsp.java is generated by the ant build, so you need to run ant first,
then also add the java files under src inside the build directory to your classpath.
#1 cjnetwork 2012-04-09
> dfshealth_jsp.class can be found inside hadoop-core-xxx.jar

I searched hadoop-core-1.0.1.jar but didn't find that class... Could you give me a pointer?
At the moment I have imported the Hadoop projects into MyEclipse, but when starting the namenode, Jetty fails with

java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.server.namenode.dfshealth_jsp
	at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:252)

Apart from that, everything seems to work.
