HMS

The relevant class diagram is shown below:

Color coding in the diagram above:

  • Green: Hive's own classes
  • Orange: code auto-generated by the Thrift framework
  • White: JDK classes

On the right is the Hive metastore client; frameworks that are compatible with this client protocol, such as Spark, connect over the Hive metastore protocol.
On the left is the server-side implementation, which mainly implements ThriftHiveMetastore.Iface. This interface defines a large number of operations: CRUD on databases, tables, functions, and so on.
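As a rough illustration of the client side, a minimal sketch (assuming a Hive 2.x client library on the classpath; the metastore URI, database and table names are placeholders):

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;

public class HmsClientDemo {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        // Point the client at a remote metastore service instead of an embedded one.
        conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://192.168.1.2:9083");
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            // Each call maps to a ThriftHiveMetastore.Iface RPC handled by HMSHandler on the server.
            Table t = client.getTable("my_db", "t1");
            System.out.println(t.getSd().getLocation());
        } finally {
            client.close();
        }
    }
}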

HMSHandler implements this interface and then calls a RawStore to read or create metadata against a concrete data source.
ObjectStore, the RawStore implementation, uses javax.jdo to connect to an actual database and carry out the operation.

JDO is an ORM layer, one level above JDBC; table joins are expressed at this level and then translated into lower-level SQL.
The ORM framework Hive uses is DataNucleus.
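ObjectStore's JDO usage looks roughly like the following simplified sketch (the real code adds transaction handling and retries); MDatabase is one of the classes in the org.apache.hadoop.hive.metastore.model package listed below:

import javax.jdo.PersistenceManager;
import javax.jdo.Query;
import org.apache.hadoop.hive.metastore.model.MDatabase;

class JdoSketch {
    // Simplified view of how ObjectStore resolves a database through javax.jdo.
    MDatabase getMDatabase(PersistenceManager pm, String dbName) {
        Query query = pm.newQuery(MDatabase.class, "name == dbName");
        query.declareParameters("java.lang.String dbName");
        query.setUnique(true);
        // DataNucleus translates this JDOQL filter into SQL against the DBS table.
        return (MDatabase) query.execute(dbName);
    }
}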
Several relevant packages:

  • org.apache.thrift.protocol: the low-level Thrift RPC classes, which handle serialization
  • org.apache.hadoop.hive.metastore.api: the RPC classes for databases, tables, etc. that Iface references; their data-structure definitions in turn reference the lower-level Thrift classes
  • org.apache.hadoop.hive.metastore.model: wraps rows read from the database into model objects, which are then converted into the Thrift RPC objects

In addition, HMS can also connect to the database directly with SQL.
ObjectStore references MetaStoreDirectSql, a class that contains many SQL statements and accesses the database directly rather than going through the ORM framework.
This is presumably an optimization done at the Hive layer.

// Abridged excerpt from Hive's MetaStoreDirectSql: the partition lookup is built as raw SQL.
private List<Partition> getPartitionsViaSqlFilterInternal(/* ... parameters elided ... */) {
    String queryText =
        "select " + PARTITIONS + ".\"PART_ID\" from " + PARTITIONS + ""
      + "  inner join " + TBLS + " on " + PARTITIONS + ".\"TBL_ID\" = " + TBLS + ".\"TBL_ID\" "
      + "    and " + TBLS + ".\"TBL_NAME\" = ? "
      + "  inner join " + DBS + " on " + TBLS + ".\"DB_ID\" = " + DBS + ".\"DB_ID\" "
      + "     and " + DBS + ".\"NAME\" = ? "
      + join(joinsForFilter, ' ')
      + " where " + DBS + ".\"CTLG_NAME\" = ? "
      + (StringUtils.isBlank(sqlFilter) ? "" : (" and " + sqlFilter)) + orderForFilter;
    // ... executes the query and maps the returned ids back to Partition objects ...
}

Hive Client

Design of a custom HiveServer
Relevant class diagram:

Color coding in the diagram above:

  • Blue: auto-generated code
  • Dark gray: the client implementation
  • Light gray: the server-side logic
  • Green: the service-layer logic

Both the server and the client implement the Iface logic, i.e. the Thrift RPC protocol.
The server has different transport implementations: binary and HTTP.
Business logic calls into the green part, which delegates to SessionManager to obtain a session,
and the concrete SQL task is then executed.
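Because the server speaks the standard thrift protocol, an ordinary Hive JDBC client can connect to it over the binary transport; a minimal sketch (host, port and credentials are placeholders; assumes the hive-jdbc driver is on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcDemo {
    public static void main(String[] args) throws Exception {
        // The hive-jdbc driver talks to the same thrift Iface the server implements.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://192.168.1.2:10000/default", "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}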

MetaCat

Related operations

The architecture is as follows:

Rough breakdown:

  • metacat-controller: CRUD operations on metadata
  • partition-controller: partition operations (not supported for the MySQL connector)
  • tag-controller: attaches tags to tables
  • others, such as creating MetaCat views, which are also not supported for MySQL

Querying catalogs

http://192.168.1.2:8080/mds/v1/catalog

Result:

[{
"catalogName": "mysql-57-db",
"connectorName": "mysql",
"clusterDto": {
"name": null,
"type": "mysql",
"account": null,
"accountId": null,
"env": null,
"region": null
}
}]
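These endpoints are plain HTTP, so the catalog list above can be fetched from Java roughly as follows (JDK 11+ HttpClient; host and port are the same placeholders as in the example URL):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetacatCatalogDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://192.168.1.2:8080/mds/v1/catalog"))
                .header("Accept", "application/json")
                .GET()
                .build();
        // The response body is the JSON array shown above.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}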

Querying a database

http://192.168.1.2:8080/mds/v1/catalog/mysql-57-db/database/my_db

Result:

{
"dateCreated": null,
"definitionMetadata": null,
"lastUpdated": null,
"name": {
"catalogName": "mysql-57-db",
"databaseName": "my_db",
"databaseName": "my_db",
"qualifiedName": "mysql-57-db/my_db"
},
"tables": ["t1", "t2", "t3"],
"type": "mysql",
"metadata": null,
"uri": null
}

Querying a table

http://192.168.1.2:8080/mds/v1/catalog/mysql-57-db/database/my_db/table/t1

Result:

{
	"audit": {
		"createdBy": null,
		"createdDate": 1701100800000,
		"lastModifiedBy": null,
		"lastModifiedDate": 1701100800000
	},
	"dataMetadata": null,
	"definitionMetadata": null,
	"fields": [{
		"comment": "",
		"name": "day",
		"partition_key": false,
		"pos": 0,
		"source_type": "VARCHAR(255)",
		"type": "varchar(255)",
		"jsonType": {
			"type": "varchar",
			"length": 255
		},
		"isNullable": true,
		"size": 255,
		"defaultValue": null,
		"isSortKey": null,
		"isIndexKey": null
	}, {
		"comment": "",
		"name": "year",
		"partition_key": false,
		"pos": 1,
		"source_type": "VARCHAR(255)",
		"type": "varchar(255)",
		"jsonType": {
			"type": "varchar",
			"length": 255
		},
		"isNullable": true,
		"size": 255,
		"defaultValue": null,
		"isSortKey": null,
		"isIndexKey": null
	}, {
		"comment": "",
		"name": "month",
		"partition_key": false,
		"pos": 2,
		"source_type": "VARCHAR(255)",
		"type": "varchar(255)",
		"jsonType": {
			"type": "varchar",
			"length": 255
		},
		"isNullable": true,
		"size": 255,
		"defaultValue": null,
		"isSortKey": null,
		"isIndexKey": null
	}],
	"metadata": null,
	"name": {
		"catalogName": "mysql-57-db",
		"databaseName": "db_sd",
		"qualifiedName": "mysql-57-db/my_db/t2",
		"tableName": "day_time_table"
	},
	"serde": null,
	"view": null,
	"partition_keys": [],
	"dataExternal": false
}

Configuration of the MySQL plugin:

connector.name=mysql
metacat.schema.list-views-with-tables=true
metacat.cache.enabled=true
metacat.interceptor.enabled=false

javax.jdo.option.name=mysql57-hello
javax.jdo.option.url=jdbc:mysql://192.168.1.2:3306/?useUnicode=true&characterEncoding=latin1&autoReconnect=true&rewriteBatchedStatements=true
javax.jdo.option.username=root
javax.jdo.option.driverClassName=com.mysql.jdbc.Driver
javax.jdo.option.password=123456
...

Add a few -D parameters to Tomcat (e.g. via CATALINA_OPTS):

  • -Dmetacat.usermetadata.config.location=<full path to usermetadata.properties>
  • -Dmetacat.plugin.config.location=<full path to the catalog config directory>

HMS optimizations

The relevant classes are as follows:

Color coding:

  • Gray: HMS-related classes; the dark gray parts are auto-generated code
  • Red: web-controller-related code
  • Green: service-layer logic
  • Blue: classes that operate on HMS
  • Yellow: external dependencies

Iface is the most central class here; MetaCat implements it, which effectively means implementing the server-side RPC protocol of HMS.
Ordinary operations call the controller classes directly, so the RPC path and the HTTP path actually share the same logic.
A few special controllers, such as Tag, Metadata and Search, call out to external systems:
search goes directly to Elasticsearch, while tag and metadata go to MySQL, so an external ES and MySQL are needed to support these features.

The blue part contains the classes that operate on HMS, split into table, database and partition classes.
There are two cases here:

  • the light-colored implementations call the Hive metastore client directly
  • the dark blue ones bypass the Hive client and connect to the underlying database directly, interacting via SQL

So MetaCat's optimization of HMS can be understood as:

  • turning ORM operations into direct JDBC operations
  • replacing the multi-table joins generated by the ORM with many single-table queries whose results are assembled in memory, reducing the computational load on the database (see the sketch after this list)
  • essentially, optimizing the SQL
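A minimal sketch of the single-table-queries idea (plain JDBC with a hypothetical helper class, not MetaCat's actual code): fetch partition parameters with one single-table query and stitch them onto the partitions in application memory instead of asking the database to join them:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class PartitionAssembler {
    // Hypothetical illustration: one query per table, joined in memory by PART_ID.
    Map<Long, Map<String, String>> loadPartitionParams(Connection conn, List<Long> partIds)
            throws Exception {
        Map<Long, Map<String, String>> params = new HashMap<>();
        String sql = "select PART_ID, PARAM_KEY, PARAM_VALUE from PARTITION_PARAMS"
                + " where PART_ID in (" + placeholders(partIds.size()) + ")";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < partIds.size(); i++) {
                ps.setLong(i + 1, partIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    params.computeIfAbsent(rs.getLong(1), k -> new HashMap<>())
                          .put(rs.getString(2), rs.getString(3));
                }
            }
        }
        return params;
    }

    private static String placeholders(int n) {
        return String.join(",", java.util.Collections.nCopies(n, "?"));
    }
}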

Spark and HMS

The relevant class diagram is shown below.
Gray classes belong to the Spark v1 catalog hierarchy,
and yellow classes are related to the external catalog.
v1 contains two catalogs:

  • InMemoryCatalog
  • HiveExternalCatalog

HiveExternalCatalog calls an IMetaStoreClient implementation;
in other words, catalog lookups are implemented by issuing RPC requests to the server through the Hive metastore client.
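This path is exercised whenever a SparkSession is built with Hive support; a minimal sketch using the Java API (assuming spark-hive is on the classpath and a metastore is reachable; the database name is a placeholder):

import org.apache.spark.sql.SparkSession;

public class SparkHmsDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("hms-catalog-demo")
                .enableHiveSupport()   // wires in HiveExternalCatalog
                .getOrCreate();
        // The catalog lookup below ends up as a metastore RPC via IMetaStoreClient.
        spark.sql("SHOW TABLES IN my_db").show();
        spark.stop();
    }
}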

A multi-version shim mechanism is used here:

  private val shim = version match {
    case hive.v2_0 => new Shim_v2_0()
    case hive.v2_1 => new Shim_v2_1()
    case hive.v2_2 => new Shim_v2_2()
    case hive.v2_3 => new Shim_v2_3()
    case hive.v3_0 => new Shim_v3_0()
    case hive.v3_1 => new Shim_v3_1()
  }

Some MetaCat optimizations

The SQL MetaCat uses to optimize around HMS

Here MetaCat does not go through HMS; it connects to the underlying data source directly over JDBC.
The main SQL in DirectSqlDatabase is:

    private static class SQL {
        static final String GET_DATABASE_ID =
            "select d.db_id from DBS d where d.name=?";
        static final String GET_DATABASE =
            "select d.desc, d.name, d.db_location_uri uri, d.owner_name owner from DBS d where d.db_id=?";
        static final String GET_DATABASE_PARAMS =
            "select param_key, param_value from DATABASE_PARAMS where db_id=?";
        static final String UPDATE_DATABASE_PARAMS =
            "update DATABASE_PARAMS set param_value=? WHERE db_id=? and param_key=?";
        static final String INSERT_DATABASE_PARAMS =
            "insert into DATABASE_PARAMS(db_id,param_key,param_value) values (?,?,?)";
        static final String UPDATE_DATABASE =
            "UPDATE DBS SET db_location_uri=?, owner_name=? WHERE db_id=?";
    }
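These are ordinary parameterized statements; executing one of them over plain JDBC would look roughly like the following sketch (an illustration, not MetaCat's actual data-access code):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class DirectSqlDemo {
    // Sketch: resolve a database id with GET_DATABASE_ID, bypassing the HMS thrift API entirely.
    Long getDatabaseId(Connection conn, String dbName) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "select d.db_id from DBS d where d.name=?")) {
            ps.setString(1, dbName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : null;
            }
        }
    }
}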

The main SQL in DirectSqlGetPartition is:

    private static class SQL {
        static final String SQL_GET_PARTITIONS_WITH_KEY_URI =
            //Add p.part_id as id to allow pagination using 'order by id'
            "select p.part_id as id, p.PART_NAME as name, p.CREATE_TIME as dateCreated, sds.location uri"
                + " from PARTITIONS as p join TBLS as t on t.TBL_ID = p.TBL_ID "
                + "join DBS as d on t.DB_ID = d.DB_ID join SDS as sds on p.SD_ID = sds.SD_ID";
        static final String SQL_GET_PARTITIONS_URI =
            "select p.part_id as id, sds.location uri"
                + " from PARTITIONS as p join TBLS as t on t.TBL_ID = p.TBL_ID "
                + "join DBS as d on t.DB_ID = d.DB_ID join SDS as sds on p.SD_ID = sds.SD_ID";

        static final String SQL_GET_PARTITIONS_WITH_KEY =
            "select p.part_id as id, p.PART_NAME as name from PARTITIONS as p"
                + " join TBLS as t on t.TBL_ID = p.TBL_ID join DBS as d on t.DB_ID = d.DB_ID";
        static final String SQL_GET_PARTITIONS =
            "select p.part_id as id, p.PART_NAME as name, p.CREATE_TIME as dateCreated,"
                + " sds.location uri, sds.input_format, sds.output_format,"
                + " sds.sd_id, s.serde_id, s.slib from PARTITIONS as p"
                + " join TBLS as t on t.TBL_ID = p.TBL_ID join DBS as d"
                + " on t.DB_ID = d.DB_ID join SDS as sds on p.SD_ID = sds.SD_ID"
                + " join SERDES s on sds.SERDE_ID=s.SERDE_ID";
        static final String SQL_GET_PARTITION_NAMES_BY_URI =
            "select p.part_name partition_name,t.tbl_name table_name,d.name schema_name,"
                + " sds.location from PARTITIONS as p join TBLS as t on t.TBL_ID = p.TBL_ID"
                + " join DBS as d on t.DB_ID = d.DB_ID join SDS as sds on p.SD_ID = sds.SD_ID where";
        static final String SQL_GET_PARTITION_PARAMS =
            "select part_id, param_key, param_value from PARTITION_PARAMS where 1=1";
        static final String SQL_GET_SD_PARAMS =
            "select sd_id, param_key, param_value from SD_PARAMS where 1=1";
        static final String SQL_GET_SERDE_PARAMS =
            "select serde_id, param_key, param_value from SERDE_PARAMS where 1=1";
        static final String SQL_GET_PARTITION_KEYS =
            "select pkey_name, pkey_type from PARTITION_KEYS as p "
                + "join TBLS as t on t.TBL_ID = p.TBL_ID join DBS as d"
                + " on t.DB_ID = d.DB_ID where d.name=? and t.tbl_name=? order by integer_idx";
        static final String SQL_GET_PARTITION_COUNT =
            "select count(*) count from PARTITIONS as p"
                + " join TBLS as t on t.TBL_ID = p.TBL_ID join DBS as d on t.DB_ID = d.DB_ID"
                + " where d.NAME = ? and t.TBL_NAME = ?";

        //audit table, takes precedence in case there are parititons overlap with the source
        static final String SQL_GET_AUDIT_TABLE_PARTITION_COUNT =
            "select count(distinct p1.part_name) count from PARTITIONS as p1 "
                + "join TBLS as t1 on t1.TBL_ID = p1.TBL_ID join DBS as d1 on t1.DB_ID = d1.DB_ID "
                + "where ( d1.NAME = ?  and t1.TBL_NAME = ? ) "
                + "or ( d1.NAME = ? and t1.TBL_NAME = ?)";

        // using nest order https://stackoverflow.com/questions/6965333/mysql-union-distinct
        static final String SQL_GET_AUDIT_TABLE_PARTITION_KEYS =
            "select pkey_name, pkey_type from ("
                + "(select pkey_name, pkey_type, integer_idx from PARTITION_KEYS as p1 "
                + "join TBLS as t1 on t1.TBL_ID = p1.TBL_ID join DBS as d1 "
                + "on t1.DB_ID = d1.DB_ID where d1.NAME = ? and t1.TBL_NAME = ? "
                + ") UNION "
                + "(select pkey_name, pkey_type, integer_idx from PARTITION_KEYS as p2 "
                + "join TBLS as t2 on t2.TBL_ID = p2.TBL_ID join DBS as d2 "
                + "on t2.DB_ID = d2.DB_ID where d2.NAME = ? and t2.TBL_NAME = ?)) as pp order by integer_idx";

        //select the partitions not in audit table
        static final String SQL_NOT_IN_AUTDI_TABLE_PARTITIONS =
            " and p.PART_NAME not in ("
                + " select p1.PART_NAME from PARTITIONS as p1"
                + " join TBLS as t1 on t1.TBL_ID = p1.TBL_ID join DBS as d1"
                + " on t1.DB_ID = d1.DB_ID where d1.NAME = ? and t1.TBL_NAME = ? )";  //audit table
    }

The main SQL in DirectSqlSavePartition is:

    private static class SQL {
        static final String SERDES_INSERT =
            "INSERT INTO SERDES (NAME,SLIB,SERDE_ID) VALUES (?,?,?)";
        static final String SERDES_UPDATE =
            "UPDATE SERDES SET NAME=?,SLIB=? WHERE SERDE_ID=?";
        static final String SERDES_DELETES =
            "DELETE FROM SERDES WHERE SERDE_ID in (%s)";
        static final String SERDE_PARAMS_INSERT =
            "INSERT INTO SERDE_PARAMS(PARAM_VALUE,SERDE_ID,PARAM_KEY) VALUES (?,?,?)";
        static final String SERDE_PARAMS_INSERT_UPDATE =
            "INSERT INTO SERDE_PARAMS(PARAM_VALUE,SERDE_ID,PARAM_KEY) VALUES (?,?,?) "
                + "ON DUPLICATE KEY UPDATE PARAM_VALUE=?";
        static final String SERDE_PARAMS_DELETES =
            "DELETE FROM SERDE_PARAMS WHERE SERDE_ID in (%s)";
        static final String SDS_INSERT =
            "INSERT INTO SDS (OUTPUT_FORMAT,IS_COMPRESSED,CD_ID,IS_STOREDASSUBDIRECTORIES,SERDE_ID,LOCATION, "
                + "INPUT_FORMAT,NUM_BUCKETS,SD_ID) VALUES (?,?,?,?,?,?,?,?,?)";
        static final String SDS_UPDATE =
            "UPDATE SDS SET OUTPUT_FORMAT=?,IS_COMPRESSED=?,IS_STOREDASSUBDIRECTORIES=?,LOCATION=?, "
                + "INPUT_FORMAT=? WHERE SD_ID=?";
        static final String BUCKETING_COLS_DELETES =
            "DELETE FROM BUCKETING_COLS WHERE SD_ID in (%s)";
        static final String SORT_COLS_DELETES =
            "DELETE FROM SORT_COLS WHERE SD_ID in (%s)";
        static final String SDS_DELETES =
            "DELETE FROM SDS WHERE SD_ID in (%s)";
        static final String PARTITIONS_INSERT =
            "INSERT INTO PARTITIONS(LAST_ACCESS_TIME,TBL_ID,CREATE_TIME,SD_ID,PART_NAME,PART_ID) VALUES (?,?,?,?,?,?)";
        static final String PARTITIONS_DELETES =
            "DELETE FROM PARTITIONS WHERE PART_ID in (%s)";
        static final String PARTITION_PARAMS_INSERT =
            "INSERT INTO PARTITION_PARAMS (PARAM_VALUE,PART_ID,PARAM_KEY) VALUES (?,?,?)";
        static final String PARTITION_PARAMS_INSERT_UPDATE =
            "INSERT INTO PARTITION_PARAMS (PARAM_VALUE,PART_ID,PARAM_KEY) VALUES (?,?,?) "
                + "ON DUPLICATE KEY UPDATE PARAM_VALUE=?";
        static final String PARTITION_PARAMS_DELETES =
            "DELETE FROM PARTITION_PARAMS WHERE PART_ID in (%s)";
        static final String PARTITION_KEY_VALS_INSERT =
            "INSERT INTO PARTITION_KEY_VALS(PART_ID,PART_KEY_VAL,INTEGER_IDX) VALUES (?,?,?)";
        static final String PARTITION_KEY_VALS_DELETES =
            "DELETE FROM PARTITION_KEY_VALS WHERE PART_ID in (%s)";
        static final String PARTITIONS_SELECT_ALL =
            "SELECT P.PART_ID, P.SD_ID, S.SERDE_ID FROM DBS D JOIN TBLS T ON D.DB_ID=T.DB_ID "
                + "JOIN PARTITIONS P ON T.TBL_ID=P.TBL_ID JOIN SDS S ON P.SD_ID=S.SD_ID "
                + "WHERE D.NAME=? and T.TBL_NAME=? limit %d";
        static final String PARTITIONS_SELECT =
            "SELECT P.PART_ID, P.SD_ID, S.SERDE_ID FROM DBS D JOIN TBLS T ON D.DB_ID=T.DB_ID "
                + "JOIN PARTITIONS P ON T.TBL_ID=P.TBL_ID JOIN SDS S ON P.SD_ID=S.SD_ID "
                + "WHERE D.NAME=? and T.TBL_NAME=? and P.PART_NAME in (%s)";
        static final String TABLE_SELECT =
            "SELECT T.TBL_ID, S.CD_ID FROM DBS D JOIN TBLS T ON D.DB_ID=T.DB_ID JOIN SDS S ON T.SD_ID=S.SD_ID "
                + "WHERE D.NAME=? and T.TBL_NAME=?";

    }
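The insert/update statements above lend themselves to JDBC batching; a sketch of how SERDE_PARAMS_INSERT_UPDATE could be applied to many rows at once (hypothetical helper; relies on MySQL's ON DUPLICATE KEY UPDATE, as in the constant):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Map;

class SerdeParamsWriter {
    // Sketch: upsert all parameters of one serde in a single JDBC batch.
    void upsertSerdeParams(Connection conn, long serdeId, Map<String, String> params)
            throws Exception {
        String sql = "INSERT INTO SERDE_PARAMS(PARAM_VALUE,SERDE_ID,PARAM_KEY) VALUES (?,?,?) "
                + "ON DUPLICATE KEY UPDATE PARAM_VALUE=?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Map.Entry<String, String> e : params.entrySet()) {
                ps.setString(1, e.getValue());
                ps.setLong(2, serdeId);
                ps.setString(3, e.getKey());
                ps.setString(4, e.getValue()); // value used again by the UPDATE branch
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}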

The main SQL in DirectSqlTable is:

    private static class SQL {
        static final String GET_TABLE_NAMES_BY_URI =
            "select d.name schema_name, t.tbl_name table_name, s.location"
                + " from DBS d, TBLS t, SDS s where d.DB_ID=t.DB_ID and t.sd_id=s.sd_id";
        static final String EXIST_TABLE_BY_NAME =
            "select 1 from DBS d join TBLS t on d.DB_ID=t.DB_ID where d.name=? and t.tbl_name=?";
        static final String GET_TABLE_ID =
            "select t.tbl_id from DBS d join TBLS t on d.DB_ID=t.DB_ID where d.name=? and t.tbl_name=?";
        static final String TABLE_PARAM_LOCK =
            "SELECT param_value FROM TABLE_PARAMS WHERE tbl_id=? and param_key=? FOR UPDATE";
        static final String TABLE_PARAMS_LOCK =
            "SELECT param_key, param_value FROM TABLE_PARAMS WHERE tbl_id=? FOR UPDATE";
        static final String UPDATE_TABLE_PARAMS =
            "update TABLE_PARAMS set param_value=? WHERE tbl_id=? and param_key=?";
        static final String INSERT_TABLE_PARAMS =
            "insert into TABLE_PARAMS(tbl_id,param_key,param_value) values (?,?,?)";
        static final String UPDATE_SDS_LOCATION =
            "UPDATE SDS s join TBLS t on s.sd_id=t.sd_id SET s.LOCATION=? WHERE t.TBL_ID=? and s.LOCATION != ?";
        static final String UPDATE_SDS_CD = "UPDATE SDS SET CD_ID=? WHERE SD_ID=?";
        static final String DELETE_COLUMNS_OLD = "DELETE FROM COLUMNS_OLD WHERE SD_ID=?";
        static final String DELETE_COLUMNS_V2 = "DELETE FROM COLUMNS_V2 WHERE CD_ID=?";
        static final String DELETE_CDS = "DELETE FROM CDS WHERE CD_ID=?";
        static final String DELETE_PARTITION_KEYS = "DELETE FROM PARTITION_KEYS WHERE TBL_ID=?";
        static final String DELETE_TABLE_PARAMS = "DELETE FROM TABLE_PARAMS WHERE TBL_ID=?";
        static final String DELETE_TAB_COL_STATS = "DELETE FROM TAB_COL_STATS WHERE TBL_ID=?";
        static final String UPDATE_TABLE_SD = "UPDATE TBLS SET SD_ID=? WHERE TBL_ID=?";
        static final String DELETE_SKEWED_COL_NAMES = "DELETE FROM SKEWED_COL_NAMES WHERE SD_ID=?";
        static final String DELETE_BUCKETING_COLS = "DELETE FROM BUCKETING_COLS WHERE SD_ID=?";
        static final String DELETE_SORT_COLS = "DELETE FROM SORT_COLS WHERE SD_ID=?";
        static final String DELETE_SD_PARAMS = "DELETE FROM SD_PARAMS WHERE SD_ID=?";
        static final String DELETE_SKEWED_COL_VALUE_LOC_MAP = "DELETE FROM SKEWED_COL_VALUE_LOC_MAP WHERE SD_ID=?";
        static final String DELETE_SKEWED_VALUES = "DELETE FROM SKEWED_VALUES WHERE SD_ID_OID=?";
        static final String UPDATE_SDS_SERDE = "UPDATE SDS SET SERDE_ID=? WHERE SD_ID=?";
        static final String DELETE_SERDE_PARAMS = "DELETE FROM SERDE_PARAMS WHERE SERDE_ID=?";
        static final String DELETE_SERDES = "DELETE FROM SERDES WHERE SERDE_ID=?";
        static final String DELETE_SDS = "DELETE FROM SDS WHERE SD_ID=?";
        static final String DELETE_TBL_PRIVS = "DELETE FROM TBL_PRIVS WHERE TBL_ID=?";
        static final String DELETE_TBL_COL_PRIVS = "DELETE FROM TBL_COL_PRIVS WHERE TBL_ID=?";
        static final String DELETE_TBLS = "DELETE FROM TBLS WHERE TBL_ID=?";
        static final String TABLE_SEQUENCE_IDS = "select t.tbl_id, s.sd_id, s.cd_id, s.serde_id"
            + " from DBS d join TBLS t on d.db_id=t.db_id join SDS s on t.sd_id=s.sd_id"
            + " where d.name=? and t.tbl_name=?";
    }
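TABLE_PARAM_LOCK and TABLE_PARAMS_LOCK use SELECT ... FOR UPDATE, i.e. they take a row lock inside a transaction before the parameters are modified; a sketch of that pattern (plain JDBC, hypothetical helper):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class TableParamUpdater {
    // Sketch: lock a TABLE_PARAMS row, then update it within the same transaction.
    void updateParam(Connection conn, long tblId, String key, String value) throws Exception {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT param_value FROM TABLE_PARAMS WHERE tbl_id=? and param_key=? FOR UPDATE")) {
                lock.setLong(1, tblId);
                lock.setString(2, key);
                try (ResultSet rs = lock.executeQuery()) {
                    rs.next(); // the row stays locked until commit or rollback
                }
            }
            try (PreparedStatement upd = conn.prepareStatement(
                    "update TABLE_PARAMS set param_value=? WHERE tbl_id=? and param_key=?")) {
                upd.setString(1, value);
                upd.setLong(2, tblId);
                upd.setString(3, key);
                upd.executeUpdate();
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}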

ThriftHiveMetastore-related functions

All functions in ThriftHiveMetastore.Iface:

abort_txn
abort_txns
add_dynamic_partitions
add_foreign_key
add_index
add_master_key
add_partition
add_partitions
add_partitions_pspec
add_partitions_req
add_partition_with_environment_context
add_primary_key
add_token
alter_database
alter_function
alter_index
alter_partition
alter_partitions
alter_partitions_with_environment_context
alter_partition_with_environment_context
alter_table
alter_table_with_cascade
alter_table_with_environment_context
append_partition
append_partition_by_name
append_partition_by_name_with_environment_context
append_partition_with_environment_context
cache_file_metadata
cancel_delegation_token
check_lock
clear_file_metadata
commit_txn
compact
compact2
create_database
create_function
create_role
create_table
create_table_with_constraints
create_table_with_environment_context
create_type
delete_partition_column_statistics
delete_table_column_statistics
drop_constraint
drop_database
drop_function
drop_index_by_name
drop_partition
drop_partition_by_name
drop_partition_by_name_with_environment_context
drop_partitions_req
drop_partition_with_environment_context
drop_role
drop_table
drop_table_with_environment_context
drop_type
exchange_partition
exchange_partitions
fire_listener_event
flushCache
get_aggr_stats_for
get_all_databases
get_all_functions
get_all_tables
get_all_token_identifiers
get_config_value
get_current_notificationEventId
get_database
get_databases
get_delegation_token
get_fields
get_fields_with_environment_context
get_file_metadata
get_file_metadata_by_expr
get_foreign_keys
get_function
get_functions
get_index_by_name
get_indexes
get_index_names
get_master_keys
getMetaConf
get_next_notification
get_num_partitions_by_filter
get_open_txns
get_open_txns_info
get_partition
get_partition_by_name
get_partition_column_statistics
get_partition_names
get_partition_names_ps
get_partitions
get_partitions_by_expr
get_partitions_by_filter
get_partitions_by_names
get_partitions_ps
get_partitions_pspec
get_partitions_ps_with_auth
get_partitions_statistics_req
get_partitions_with_auth
get_partition_with_auth
get_part_specs_by_filter
get_primary_keys
get_principals_in_role
get_privilege_set
get_role_grants_for_principal
get_role_names
get_schema
get_schema_with_environment_context
get_table
get_table_column_statistics
get_table_meta
get_table_names_by_filter
get_table_objects_by_name
get_table_objects_by_name_req
get_table_req
get_tables
get_tables_by_type
get_table_statistics_req
get_token
get_type
get_type_all
grant_privileges
grant_revoke_privileges
grant_revoke_role
grant_role
heartbeat
heartbeat_txn_range
isPartitionMarkedForEvent
list_privileges
list_roles
lock
markPartitionForEvent
open_txns
partition_name_has_valid_characters
partition_name_to_spec
partition_name_to_vals
put_file_metadata
remove_master_key
remove_token
rename_partition
renew_delegation_token
revoke_privileges
revoke_role
set_aggr_stats_for
setMetaConf
set_ugi
show_compact
show_locks
unlock
update_master_key
update_partition_column_statistics
update_table_column_statistics

All classes in the org.apache.hadoop.hive.metastore.api package:

AbortTxnRequest
AbortTxnsRequest
AddCheckConstraintRequest
AddDefaultConstraintRequest
AddDynamicPartitions
AddForeignKeyRequest
AddNotNullConstraintRequest
AddPartitionsRequest
AddPartitionsResult
AddPrimaryKeyRequest
AddUniqueConstraintRequest
AggrStats
AllocateTableWriteIdsRequest
AllocateTableWriteIdsResponse
AlreadyExistsException
AlterCatalogRequest
AlterISchemaRequest
BasicTxnInfo
BinaryColumnStatsData
BooleanColumnStatsData
CacheFileMetadataRequest
CacheFileMetadataResult
Catalog
CheckConstraintsRequest
CheckConstraintsResponse
CheckLockRequest
ClearFileMetadataRequest
ClearFileMetadataResult
ClientCapabilities
ClientCapability
CmRecycleRequest
CmRecycleResponse
ColumnStatistics
ColumnStatisticsData
ColumnStatisticsDesc
ColumnStatisticsObj
CommitTxnRequest
CompactionRequest
CompactionResponse
CompactionType
ConfigValSecurityException
CreateCatalogRequest
CreationMetadata
CurrentNotificationEventId
Database
DataOperationType
Date
DateColumnStatsData
Decimal
DecimalColumnStatsData
DefaultConstraintsRequest
DefaultConstraintsResponse
DoubleColumnStatsData
DropCatalogRequest
DropConstraintRequest
DropPartitionsExpr
DropPartitionsRequest
DropPartitionsResult
EnvironmentContext
EventRequestType
FieldSchema
FileMetadataExprType
FindSchemasByColsResp
FindSchemasByColsRqst
FireEventRequest
FireEventRequestData
FireEventResponse
ForeignKeysRequest
ForeignKeysResponse
Function
FunctionType
GetAllFunctionsResponse
GetCatalogRequest
GetCatalogResponse
GetCatalogsResponse
GetFileMetadataByExprRequest
GetFileMetadataByExprResult
GetFileMetadataRequest
GetFileMetadataResult
GetOpenTxnsInfoResponse
GetOpenTxnsResponse
GetPrincipalsInRoleRequest
GetPrincipalsInRoleResponse
GetRoleGrantsForPrincipalRequest
GetRoleGrantsForPrincipalResponse
GetRuntimeStatsRequest
GetSerdeRequest
GetTableRequest
GetTableResult
GetTablesRequest
GetTablesResult
GetValidWriteIdsRequest
GetValidWriteIdsResponse
GrantRevokePrivilegeRequest
GrantRevokePrivilegeResponse
GrantRevokeRoleRequest
GrantRevokeRoleResponse
GrantRevokeType
HeartbeatRequest
HeartbeatTxnRangeRequest
HeartbeatTxnRangeResponse
hive_metastoreConstants
HiveObjectPrivilege
HiveObjectRef
HiveObjectType
InsertEventRequestData
InvalidInputException
InvalidObjectException
InvalidOperationException
InvalidPartitionException
ISchema
ISchemaName
LockComponent
LockLevel
LockRequest
LockResponse
LockState
LockType
LongColumnStatsData
MapSchemaVersionToSerdeRequest
Materialization
MetadataPpdResult
MetaException
NoSuchLockException
NoSuchObjectException
NoSuchTxnException
NotificationEvent
NotificationEventRequest
NotificationEventResponse
NotificationEventsCountRequest
NotificationEventsCountResponse
NotNullConstraintsRequest
NotNullConstraintsResponse
OpenTxnRequest
OpenTxnsResponse
Order
Partition
PartitionEventType
PartitionListComposingSpec
PartitionsByExprRequest
PartitionsByExprResult
PartitionSpec
PartitionSpecWithSharedSD
PartitionsStatsRequest
PartitionsStatsResult
PartitionValuesRequest
PartitionValuesResponse
PartitionValuesRow
PartitionWithoutSD
PrimaryKeysRequest
PrimaryKeysResponse
PrincipalPrivilegeSet
PrincipalType
PrivilegeBag
PrivilegeGrantInfo
PutFileMetadataRequest
PutFileMetadataResult
ReplTblWriteIdStateRequest
RequestPartsSpec
ResourceType
ResourceUri
Role
RolePrincipalGrant
RuntimeStat
Schema
SchemaCompatibility
SchemaType
SchemaValidation
SchemaVersion
SchemaVersionDescriptor
SchemaVersionState
SerDeInfo
SerdeType
SetPartitionsStatsRequest
SetSchemaVersionStateRequest
ShowCompactRequest
ShowCompactResponse
ShowCompactResponseElement
ShowLocksRequest
ShowLocksResponse
ShowLocksResponseElement
SkewedInfo
SQLCheckConstraint
SQLDefaultConstraint
SQLForeignKey
SQLNotNullConstraint
SQLPrimaryKey
SQLUniqueConstraint
StorageDescriptor
StringColumnStatsData
Table
TableMeta
TableStatsRequest
TableStatsResult
TableValidWriteIds
ThriftHiveMetastore.java
TxnAbortedException
TxnInfo
TxnOpenException
TxnState
TxnToWriteId
Type
UniqueConstraintsRequest
UniqueConstraintsResponse
UnknownDBException
UnknownPartitionException
UnknownTableException
UnlockRequest
Version
WMAlterPoolRequest
WMAlterPoolResponse
WMAlterResourcePlanRequest
WMAlterResourcePlanResponse
WMAlterTriggerRequest
WMAlterTriggerResponse
WMCreateOrDropTriggerToPoolMappingRequest
WMCreateOrDropTriggerToPoolMappingResponse
WMCreateOrUpdateMappingRequest
WMCreateOrUpdateMappingResponse
WMCreatePoolRequest
WMCreatePoolResponse
WMCreateResourcePlanRequest
WMCreateResourcePlanResponse
WMCreateTriggerRequest
WMCreateTriggerResponse
WMDropMappingRequest
WMDropMappingResponse
WMDropPoolRequest
WMDropPoolResponse
WMDropResourcePlanRequest
WMDropResourcePlanResponse
WMDropTriggerRequest
WMDropTriggerResponse
WMFullResourcePlan
WMGetActiveResourcePlanRequest
WMGetActiveResourcePlanResponse
WMGetAllResourcePlanRequest
WMGetAllResourcePlanResponse
WMGetResourcePlanRequest
WMGetResourcePlanResponse
WMGetTriggersForResourePlanRequest
WMGetTriggersForResourePlanResponse
WMMapping
WMNullablePool
WMNullableResourcePlan
WMPool
WMPoolSchedulingPolicy
WMPoolTrigger
WMResourcePlan
WMResourcePlanStatus
WMTrigger
WMValidateResourcePlanRequest
WMValidateResourcePlanResponse

Classes in the org.apache.thrift.protocol package:

TBase64Utils
TBinaryProtocol
TCompactProtocol
TField
TJSONProtocol
TList
TMap
TMessage
TMessageType
TMultiplexedProtocol
TProtocol
TProtocolDecorator
TProtocolException
TProtocolFactory
TProtocolUtil
TSet
TSimpleJSONProtocol
TStruct
TTupleProtocol
TType
