tf_1.8_xla_doc
Todo List
Member
tensorflow::tfcompile::Main
(const MainFlags &flags)
3. Generate output (object, header, etc.)
Member
tensorflow::XlaCompiler::CompileGraph
(const CompileOptions &options, string const &name, std::unique_ptr< Graph > graph, const std::vector< Argument > &args, CompilationResult *result)
Rest of the code
Class
xla::AlgebraicSimplifier
Contains a large amount of code, including the handling of broadcast semantics.
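To make the broadcast subtlety concrete, here is a toy NumPy sketch of one representative simplifier rule (x + 0 → x). The helper name is invented, and the rule shown is only sound when broadcasting would not change the result shape, which is exactly the kind of check such a pass has to make:

```python
import numpy as np

# Toy version of one algebraic-simplifier rule: adding a broadcast zero
# is a no-op, so `x + zeros` may be rewritten to just `x`, but only
# when broadcasting would not change the result's shape.
def simplify_add_zero(x, zeros):
    if np.broadcast_shapes(x.shape, zeros.shape) == x.shape:
        return x              # safe: drop the add entirely
    return x + zeros          # broadcast grows the shape; keep the add

x = np.ones((3, 4))
assert simplify_add_zero(x, np.zeros((1, 4))) is x          # rewritten away
assert simplify_add_zero(np.ones((1, 4)), np.zeros((3, 4))).shape == (3, 4)
```

The second assertion shows why the shape guard is needed: there, dropping the add would silently change the output shape.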
Member
xla::anonymous_namespace{dot_decomposer.cc}::DecomposeBatchDot
(HloInstruction *dot)
Opinion: it appears to decompose a high-rank (batched) dot operation into low-rank dots. The transformation is mathematically grounded and maps directly onto what a computer can execute.
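The note above can be illustrated with a minimal NumPy sketch of the idea, assuming the usual batched-dot shapes; the helper name is ours, not XLA's:

```python
import numpy as np

# Toy sketch (not XLA code) of decomposing a batch dot: a rank-3 batched
# dot [B, M, K] x [B, K, N] becomes B independent rank-2 matmuls, one per
# batch element, whose results are stacked back into a [B, M, N] array.
def decompose_batch_dot(lhs, rhs):
    return np.stack([lhs[b] @ rhs[b] for b in range(lhs.shape[0])])

lhs = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
rhs = np.arange(2 * 4 * 5, dtype=float).reshape(2, 4, 5)
out = decompose_batch_dot(lhs, rhs)
```

Each per-batch slice is an ordinary rank-2 dot that a backend can lower directly, which is the point of the decomposition.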
Class
xla::BatchNormExpander
See how the function works.
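As a starting point for that investigation: "expanding" batch norm presumably means rewriting the fused op into primitive ops (reductions for mean and variance, then normalize, scale, shift). A NumPy sketch of that formula, with invented names and an illustrative epsilon:

```python
import numpy as np

# Sketch of expanding a fused batch-norm op into primitives: reduce over
# the batch axis to get mean and variance, normalize, then apply the
# learned scale and offset.
def expand_batch_norm(x, scale, offset, eps=1e-3):
    mean = x.mean(axis=0)
    var = ((x - mean) ** 2).mean(axis=0)
    return scale * (x - mean) / np.sqrt(var + eps) + offset

x = np.array([[1.0, 2.0], [3.0, 6.0]])
y = expand_batch_norm(x, np.ones(2), np.zeros(2))
```

With identity scale and zero offset the output is standardized per feature: zero mean and (up to epsilon) unit variance.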
Member
xla::cpu::CpuCompiler::RunHloPasses
(HloModule *module, bool is_aot_compile)
See what the invariant checkers and passes do.
Unknown argument.
Class
xla::cpu::CpuInstructionFusion
See what this pass does.
Member
xla::cpu::IrEmitter::EmitComputation
(HloComputation *computation, const string &function_name_prefix, bool is_top_level_computation, std::vector< const HloInstruction *> *instruction_order)
See what it does
Member
xla::CreateMemoryMinimizingSequence
(const HloModule &module, const LogicalBuffer::SizeFunction &size_function, const MemorySchedulerAlgorithm &algorithm)
Trace which component decides what should be fused.
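The real algorithm lives behind the MemorySchedulerAlgorithm argument; as a rough intuition only, here is a toy greedy list scheduler that, among ops whose operands are all computed, picks the one with the smallest allocated-minus-freed byte balance. The graph encoding and cost rule below are our assumptions, not XLA's:

```python
# Toy greedy list scheduler, loosely in the spirit of a memory-minimizing
# sequence: among ready ops, pick the one whose output-bytes-allocated
# minus operand-bytes-freed is smallest.
def schedule(ops, size):
    # ops: {name: [operand names]} (a DAG); size: {name: output bytes}
    users = {n: set() for n in ops}
    for n, deps in ops.items():
        for d in deps:
            users[d].add(n)
    pending, done, order = dict(ops), set(), []
    while pending:
        ready = [n for n, deps in pending.items() if all(d in done for d in deps)]
        def cost(n):
            # Bytes freed: operands for which n is the last remaining user.
            freed = sum(size[d] for d in pending[n] if users[d] == {n})
            return size[n] - freed
        best = min(ready, key=cost)
        order.append(best)
        done.add(best)
        for d in pending[best]:
            users[d].discard(best)
        del pending[best]
    return order

# Diamond graph: d consumes b and c, both of which consume a.
ops = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
size = {"a": 4, "b": 8, "c": 2, "d": 1}
order = schedule(ops, size)
```

On this graph the greedy rule schedules the small op c before the large op b, keeping fewer live bytes in flight between them.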
Class
xla::HloVerifier
Verify what it checks. For now, it appears to check HLO shapes.
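For a flavor of what shape checking means, here is a toy rule a verifier might enforce for a rank-2 dot; the function name and error message are invented:

```python
# Toy shape rule in the spirit of an HLO verifier: a rank-2 dot contracts
# lhs's second dimension against rhs's first, so those two dimensions
# must agree; the result keeps the two outer dimensions.
def check_dot_shape(lhs_shape, rhs_shape):
    (m, k), (k2, n) = lhs_shape, rhs_shape
    if k != k2:
        raise ValueError(f"dot contraction mismatch: {k} vs {k2}")
    return (m, n)

assert check_dot_shape((3, 4), (4, 5)) == (3, 5)
```

A real verifier applies a rule like this per opcode, rejecting the module before later passes can trip over an inconsistent shape.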
Member
xla::TuplePointsToAnalysis::Run
(const HloModule *module)
Too deep. Abort.
Generated by Doxygen 1.8.14