Environment: Python 3.6, TensorFlow 1.11

Test code:

The test runs TensorFlow in eager mode, which makes it easy to debug and inspect intermediate results.

 import tensorflow as tf

 tf.enable_eager_execution()

 batch_size = 4
input = tf.random_normal(shape=[3, batch_size, 6], dtype=tf.float32)
cell = tf.nn.rnn_cell.BasicLSTMCell(10, forget_bias=1.0, state_is_tuple=True)
init_state = cell.zero_state(batch_size, dtype=tf.float32)
seq_length = tf.constant([2,3,2,3],dtype=tf.int32)
import pdb; pdb.set_trace()
output, final_state = tf.nn.dynamic_rnn(cell, input, initial_state=init_state, sequence_length=seq_length, time_major=True)
# If time_major is True, the first dimension of the RNN input/output holds the time steps; this is recommended because it runs slightly faster.
# If time_major is False, the second dimension of the input holds the time steps.
# With time_major=True the output has shape [steps, batch_size, depth]; otherwise it is [batch_size, max_time, depth], i.e. the same layout as the input.
# final_state is the final state of the whole LSTM and contains c and h, each of shape [batch_size, n_hidden].
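
For reference, a minimal check of the result shapes and the zero-padding behaviour, assuming the snippet above has just been run in eager mode (the shapes follow from batch_size=4, 3 time steps, and 10 hidden units):

# Quick sanity check (assumes the snippet above has been run in eager mode).
print(output.shape)         # (3, 4, 10): [max_time, batch_size, num_units] because time_major=True
print(final_state.c.shape)  # (4, 10)
print(final_state.h.shape)  # (4, 10)
# Batch elements 0 and 2 have seq_length 2, so their outputs at time step 2 are zeroed out.
print(output[2, 0])         # expected to be all zeros
print(output[2, 1])         # generally non-zero: this element has seq_length 3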

tf.nn.dynamic_rnn is defined in tensorflow/python/ops/rnn.py; step into it there to debug:

 @tf_export("nn.dynamic_rnn")
def dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None,
dtype=None, parallel_iterations=None, swap_memory=False,
time_major=False, scope=None):
"""Creates a recurrent neural network specified by RNNCell `cell`.

Performs fully dynamic unrolling of `inputs`.

Example:

```python
# create a BasicRNNCell
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.nn.dynamic_rnn(rnn_cell, input_data,
initial_state=initial_state,
dtype=tf.float32)
```

```python
# create 2 LSTMCells
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
# create a RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is a N-tuple where N is the number of LSTMCells containing a
# tf.contrib.rnn.LSTMStateTuple for each cell
outputs, state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
```

Args:
cell: An instance of RNNCell.
inputs: The RNN inputs.
If `time_major == False` (default), this must be a `Tensor` of shape:
`[batch_size, max_time, ...]`, or a nested tuple of such
elements.
If `time_major == True`, this must be a `Tensor` of shape:
`[max_time, batch_size, ...]`, or a nested tuple of such
elements.
This may also be a (possibly nested) tuple of Tensors satisfying
this property. The first two dimensions must match across all the inputs,
but otherwise the ranks and other shape components may differ.
In this case, input to `cell` at each time-step will replicate the
structure of these tuples, except for the time dimension (from which the
time is taken).
The input to `cell` at each time step will be a `Tensor` or (possibly
nested) tuple of Tensors each with dimensions `[batch_size, ...]`.
sequence_length: (optional) An int32/int64 vector sized `[batch_size]`.
Used to copy-through state and zero-out outputs when past a batch
element's sequence length. So it's more for performance than correctness.
initial_state: (optional) An initial state for the RNN.
If `cell.state_size` is an integer, this must be
a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
If `cell.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell.state_size`.
dtype: (optional) The data type for the initial state and expected output.
Required if initial_state is not provided or RNN state has a heterogeneous
dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency
and can be run in parallel, will be. This parameter trades off
time for space. Values >> 1 use more memory but take less time,
while smaller values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs
which would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors.
If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
Using `time_major = True` is a bit more efficient because it avoids
transposes at the beginning and end of the RNN calculation. However,
most TensorFlow data is batch-major, so by default this function
accepts input and emits output in batch-major form.
scope: VariableScope for the created subgraph; defaults to "rnn".

Returns:
A pair (outputs, state) where:

outputs: The RNN output `Tensor`.
If time_major == False (default), this will be a `Tensor` shaped:
`[batch_size, max_time, cell.output_size]`.
If time_major == True, this will be a `Tensor` shaped:
`[max_time, batch_size, cell.output_size]`.
Note, if `cell.output_size` is a (possibly nested) tuple of integers
or `TensorShape` objects, then `outputs` will be a tuple having the
same structure as `cell.output_size`, containing Tensors having shapes
corresponding to the shape data in `cell.output_size`.

state: The final state. If `cell.state_size` is an int, this
will be shaped `[batch_size, cell.state_size]`. If it is a
`TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
If it is a (possibly nested) tuple of ints or `TensorShape`, this will
be a tuple having the corresponding shapes. If cells are `LSTMCells`
`state` will be a tuple containing a `LSTMStateTuple` for each cell.

Raises:
TypeError: If `cell` is not an instance of RNNCell.
ValueError: If inputs is None or an empty list.
"""
rnn_cell_impl.assert_like_rnncell("cell", cell)

with vs.variable_scope(scope or "rnn") as varscope:
# Create a new scope in which the caching device is either
# determined by the parent scope, or is set to place the cached
# Variable using the same placement as for the rest of the RNN.
if _should_cache():
if varscope.caching_device is None:
varscope.set_caching_device(lambda op: op.device)

# By default, time_major==False and inputs are batch-major: shaped
# [batch, time, depth]
# For internal calculations, we transpose to [time, batch, depth]
flat_input = nest.flatten(inputs)

if not time_major:
# (B,T,D) => (T,B,D)
flat_input = [ops.convert_to_tensor(input_) for input_ in flat_input]
flat_input = tuple(_transpose_batch_time(input_) for input_ in flat_input)

parallel_iterations = parallel_iterations or 32
if sequence_length is not None:
sequence_length = math_ops.to_int32(sequence_length)
if sequence_length.get_shape().ndims not in (None, 1):
raise ValueError(
"sequence_length must be a vector of length batch_size, "
"but saw shape: %s" % sequence_length.get_shape())
sequence_length = array_ops.identity( # Just to find it in the graph.
sequence_length, name="sequence_length")

batch_size = _best_effort_input_batch_size(flat_input)

if initial_state is not None:
state = initial_state
else:
if not dtype:
raise ValueError("If there is no initial_state, you must give a dtype.")
if getattr(cell, "get_initial_state", None) is not None:
state = cell.get_initial_state(
inputs=None, batch_size=batch_size, dtype=dtype)
else:
state = cell.zero_state(batch_size, dtype)

def _assert_has_shape(x, shape):
x_shape = array_ops.shape(x)
packed_shape = array_ops.stack(shape)
return control_flow_ops.Assert(
math_ops.reduce_all(math_ops.equal(x_shape, packed_shape)),
["Expected shape for Tensor %s is " % x.name,
packed_shape, " but saw shape: ", x_shape])

if not context.executing_eagerly() and sequence_length is not None:
# Perform some shape validation
with ops.control_dependencies(
[_assert_has_shape(sequence_length, [batch_size])]):
sequence_length = array_ops.identity(
sequence_length, name="CheckSeqLen")

inputs = nest.pack_sequence_as(structure=inputs, flat_sequence=flat_input)

(outputs, final_state) = _dynamic_rnn_loop(
cell,
inputs,
state,
parallel_iterations=parallel_iterations,
swap_memory=swap_memory,
sequence_length=sequence_length,
dtype=dtype)

# Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].
# If we are performing batch-major calculations, transpose output back
# to shape [batch, time, depth]
if not time_major:
# (T,B,D) => (B,T,D)
outputs = nest.map_structure(_transpose_batch_time, outputs)

return (outputs, final_state)
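
Note how, when time_major=False, the inputs are transposed to time-major form before the loop and the outputs are transposed back afterwards. The sketch below reproduces that (B, T, ...) <-> (T, B, ...) transpose with tf.transpose for illustration; the real helper is the internal rnn._transpose_batch_time:

import tensorflow as tf

def transpose_batch_time(x):
    # Swap the first two axes and keep the rest: (B, T, ...) -> (T, B, ...).
    rank = tf.rank(x)
    perm = tf.concat([[1, 0], tf.range(2, rank)], axis=0)
    return tf.transpose(x, perm)

x = tf.random_normal([4, 3, 6])        # [batch, time, depth]
print(transpose_batch_time(x).shape)   # (3, 4, 6): [time, batch, depth]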

Finally, dynamic_rnn calls _dynamic_rnn_loop:

 def _dynamic_rnn_loop(cell,
inputs,
initial_state,
parallel_iterations,
swap_memory,
sequence_length=None,
dtype=None):
"""Internal implementation of Dynamic RNN.

Args:
cell: An instance of RNNCell.
inputs: A `Tensor` of shape [time, batch_size, input_size], or a nested
tuple of such elements.
initial_state: A `Tensor` of shape `[batch_size, state_size]`, or if
`cell.state_size` is a tuple, then this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell.state_size`.
parallel_iterations: Positive Python int.
swap_memory: A Python boolean
sequence_length: (optional) An `int32` `Tensor` of shape [batch_size].
dtype: (optional) Expected dtype of output. If not specified, inferred from
initial_state.

Returns:
Tuple `(final_outputs, final_state)`.
final_outputs:
A `Tensor` of shape `[time, batch_size, cell.output_size]`. If
`cell.output_size` is a (possibly nested) tuple of ints or `TensorShape`
objects, then this returns a (possibly nested) tuple of Tensors matching
the corresponding shapes.
final_state:
A `Tensor`, or possibly nested tuple of Tensors, matching in length
and shapes to `initial_state`.
Raises:
ValueError: If the input depth cannot be inferred via shape inference
from the inputs.
"""
import pdb;pdb.set_trace()
state = initial_state
assert isinstance(parallel_iterations, int), "parallel_iterations must be int"

state_size = cell.state_size  # LSTMStateTuple(c=10, h=10)
flat_input = nest.flatten(inputs)  # list; flat_input[0].shape = TensorShape([Dimension(3), Dimension(4), Dimension(6)])
flat_output_size = nest.flatten(cell.output_size)  # [10]

# Construct an initial output
input_shape = array_ops.shape(flat_input[0])  # array([3, 4, 6])
time_steps = input_shape[0]
batch_size = _best_effort_input_batch_size(flat_input)

inputs_got_shape = tuple(input_.get_shape().with_rank_at_least(3)
for input_ in flat_input)  # (TensorShape([Dimension(3), Dimension(4), Dimension(6)]),)
const_time_steps, const_batch_size = inputs_got_shape[0].as_list()[:2]  # 3, 4

for shape in inputs_got_shape:
if not shape[2:].is_fully_defined():
raise ValueError(
"Input size (depth of inputs) must be accessible via shape inference,"
" but saw value None.")
got_time_steps = shape[0].value
got_batch_size = shape[1].value
if const_time_steps != got_time_steps:
raise ValueError(
"Time steps is not the same for all the elements in the input in a "
"batch.")
if const_batch_size != got_batch_size:
raise ValueError(
"Batch_size is not the same for all the elements in the input.")

# Prepare dynamic conditional copying of state & output
def _create_zero_arrays(size):
size = _concat(batch_size, size)
return array_ops.zeros(
array_ops.stack(size), _infer_state_dtype(dtype, state))

flat_zero_output = tuple(_create_zero_arrays(output)
for output in flat_output_size)  # tuple; flat_zero_output[0].shape: TensorShape([Dimension(4), Dimension(10)])
zero_output = nest.pack_sequence_as(structure=cell.output_size,
flat_sequence=flat_zero_output)  # TensorShape([Dimension(4), Dimension(10)])

if sequence_length is not None:
min_sequence_length = math_ops.reduce_min(sequence_length)
max_sequence_length = math_ops.reduce_max(sequence_length)
else:
max_sequence_length = time_steps

time = array_ops.constant(0, dtype=dtypes.int32, name="time")

with ops.name_scope("dynamic_rnn") as scope:
base_name = scope

def _create_ta(name, element_shape, dtype):
return tensor_array_ops.TensorArray(dtype=dtype,
size=time_steps,
element_shape=element_shape,
tensor_array_name=base_name + name)

in_graph_mode = not context.executing_eagerly()
if in_graph_mode:
output_ta = tuple(
_create_ta(
"output_%d" % i,
element_shape=(tensor_shape.TensorShape([const_batch_size])
.concatenate(
_maybe_tensor_shape_from_tensor(out_size))),
dtype=_infer_state_dtype(dtype, state))
for i, out_size in enumerate(flat_output_size))
input_ta = tuple(
_create_ta(
"input_%d" % i,
element_shape=flat_input_i.shape[1:],
dtype=flat_input_i.dtype)
for i, flat_input_i in enumerate(flat_input))
input_ta = tuple(ta.unstack(input_)
for ta, input_ in zip(input_ta, flat_input))
else:
output_ta = tuple([0 for _ in range(time_steps.numpy())]
for i in range(len(flat_output_size)))#([0, 0, 0],)
input_ta = flat_input  # list; input_ta[0].shape = TensorShape([Dimension(3), Dimension(4), Dimension(6)])

def _time_step(time, output_ta_t, state):
"""Take a time step of the dynamic RNN.

Args:
time: int32 scalar Tensor.
output_ta_t: List of `TensorArray`s that represent the output.
state: nested tuple of vector tensors that represent the state.

Returns:
The tuple (time + 1, output_ta_t with updated flow, new_state).
"""
import pdb;pdb.set_trace()
if in_graph_mode:
input_t = tuple(ta.read(time) for ta in input_ta)
# Restore some shape information
for input_, shape in zip(input_t, inputs_got_shape):
input_.set_shape(shape[1:])
else:
input_t = tuple(ta[time.numpy()] for ta in input_ta)  # TensorShape([Dimension(4), Dimension(6)])

input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t)  # TensorShape([Dimension(4), Dimension(6)])
# Keras RNN cells only accept state as list, even if it's a single tensor.
is_keras_rnn_cell = _is_keras_rnn_cell(cell)
if is_keras_rnn_cell and not nest.is_sequence(state):
state = [state]
call_cell = lambda: cell(input_t, state)

if sequence_length is not None:
(output, new_state) = _rnn_step(
time=time,
sequence_length=sequence_length,
min_sequence_length=min_sequence_length,
max_sequence_length=max_sequence_length,
zero_output=zero_output,
state=state,
call_cell=call_cell,
state_size=state_size,
skip_conditionals=True)
else:
(output, new_state) = call_cell()

# Keras cells always wrap state as list, even if it's a single tensor.
if is_keras_rnn_cell and len(new_state) == 1:
new_state = new_state[0]
# Pack state if using state tuples
output = nest.flatten(output)

if in_graph_mode:
output_ta_t = tuple(
ta.write(time, out) for ta, out in zip(output_ta_t, output))
else:
for ta, out in zip(output_ta_t, output):
ta[time.numpy()] = out

return (time + 1, output_ta_t, new_state)

if in_graph_mode:
# Make sure that we run at least 1 step, if necessary, to ensure
# the TensorArrays pick up the dynamic shape.
loop_bound = math_ops.minimum(
time_steps, math_ops.maximum(1, max_sequence_length))
else:
# Using max_sequence_length isn't currently supported in the Eager branch.
loop_bound = time_steps

_, output_final_ta, final_state = control_flow_ops.while_loop(
cond=lambda time, *_: time < loop_bound,
body=_time_step,
loop_vars=(time, output_ta, state),
parallel_iterations=parallel_iterations,
maximum_iterations=time_steps,
swap_memory=swap_memory)

# Unpack final output if not using output tuples.
if in_graph_mode:
final_outputs = tuple(ta.stack() for ta in output_final_ta)
# Restore some shape information
for output, output_size in zip(final_outputs, flat_output_size):
shape = _concat(
[const_time_steps, const_batch_size], output_size, static=True)
output.set_shape(shape)
else:
final_outputs = output_final_ta

final_outputs = nest.pack_sequence_as(
structure=cell.output_size, flat_sequence=final_outputs)
if not in_graph_mode:
final_outputs = nest.map_structure_up_to(
cell.output_size, lambda x: array_ops.stack(x, axis=0), final_outputs)

return (final_outputs, final_state)
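
In graph mode the unrolling above is a tf.while_loop over a TensorArray of inputs: each iteration reads one time step, applies the cell, and writes the output back. Below is a stripped-down sketch of that pattern, illustrative only; it ignores sequence_length and the eager branch, and simple_dynamic_rnn is a made-up name rather than part of TensorFlow:

import tensorflow as tf

def simple_dynamic_rnn(cell, inputs, initial_state):
    """inputs: [time, batch, depth], i.e. the time-major layout used internally."""
    time_steps = tf.shape(inputs)[0]
    input_ta = tf.TensorArray(dtype=inputs.dtype, size=time_steps).unstack(inputs)
    output_ta = tf.TensorArray(dtype=inputs.dtype, size=time_steps)

    def _time_step(time, output_ta_t, state):
        x_t = input_ta.read(time)              # [batch, depth] slice for this step
        output, new_state = cell(x_t, state)   # run the cell once
        return time + 1, output_ta_t.write(time, output), new_state

    _, output_final_ta, final_state = tf.while_loop(
        cond=lambda time, *_: time < time_steps,
        body=_time_step,
        loop_vars=(tf.constant(0, dtype=tf.int32), output_ta, initial_state))
    # Stack back to [time, batch, num_units].
    return output_final_ta.stack(), final_state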

As the code shows, dynamic_rnn mainly relies on while_loop to handle batches whose elements have different sequence lengths.

From the loop-bound logic right before the while_loop (the `if in_graph_mode:` branch), you can see that when sequence_length is not given, the loop simply runs for time_steps = input.shape[0] iterations and the cell is called directly at every step. When sequence_length is given, _time_step calls _rnn_step (with skip_conditionals=True), which zeroes the output and freezes the state for every batch element whose current time step is at or beyond its sequence length; this is implemented by the _copy_one_through / _copy_some_through helpers in the code below.

 def _rnn_step(
time, sequence_length, min_sequence_length, max_sequence_length,
zero_output, state, call_cell, state_size, skip_conditionals=False):
"""Calculate one step of a dynamic RNN minibatch.

Returns an (output, state) pair conditioned on `sequence_length`.
When skip_conditionals=False, the pseudocode is something like:

if t >= max_sequence_length:
return (zero_output, state)
if t < min_sequence_length:
return call_cell()

# Selectively output zeros or output, old state or new state depending
# on whether we've finished calculating each row.
new_output, new_state = call_cell()
final_output = np.vstack([
zero_output if time >= sequence_length[r] else new_output_r
for r, new_output_r in enumerate(new_output)
])
final_state = np.vstack([
state[r] if time >= sequence_length[r] else new_state_r
for r, new_state_r in enumerate(new_state)
])
return (final_output, final_state)

Args:
time: int32 `Tensor` scalar.
sequence_length: int32 `Tensor` vector of size [batch_size].
min_sequence_length: int32 `Tensor` scalar, min of sequence_length.
max_sequence_length: int32 `Tensor` scalar, max of sequence_length.
zero_output: `Tensor` vector of shape [output_size].
state: Either a single `Tensor` matrix of shape `[batch_size, state_size]`,
or a list/tuple of such tensors.
call_cell: lambda returning tuple of (new_output, new_state) where
new_output is a `Tensor` matrix of shape `[batch_size, output_size]`.
new_state is a `Tensor` matrix of shape `[batch_size, state_size]`.
state_size: The `cell.state_size` associated with the state.
skip_conditionals: Python bool, whether to skip using the conditional
calculations. This is useful for `dynamic_rnn`, where the input tensor
matches `max_sequence_length`, and using conditionals just slows
everything down.

Returns:
A tuple of (`final_output`, `final_state`) as given by the pseudocode above:
final_output is a `Tensor` matrix of shape [batch_size, output_size]
final_state is either a single `Tensor` matrix, or a tuple of such
matrices (matching length and shapes of input `state`).

Raises:
ValueError: If the cell returns a state tuple whose length does not match
that returned by `state_size`.
"""
import pdb;pdb.set_trace()
# Convert state to a list for ease of use
flat_state = nest.flatten(state)  # [c, h], each of shape [4, 10]
flat_zero_output = nest.flatten(zero_output)  # list; flat_zero_output[0].shape: TensorShape([Dimension(4), Dimension(10)])

# Vector describing which batch entries are finished.
copy_cond = time >= sequence_length  # step 1: array([False, False, False, False])

def _copy_one_through(output, new_output):
# TensorArray and scalar get passed through.
if isinstance(output, tensor_array_ops.TensorArray):
return new_output
if output.shape.ndims == 0:
return new_output
# Otherwise propagate the old or the new value.
with ops.colocate_with(new_output):
return array_ops.where(copy_cond, output, new_output)  # steps beyond the sequence length keep the zero output / old state

def _copy_some_through(flat_new_output, flat_new_state):
# Use broadcasting select to determine which values should get
# the previous state & zero output, and which values should get
# a calculated state & output.
flat_new_output = [
_copy_one_through(zero_output, new_output)
for zero_output, new_output in zip(flat_zero_output, flat_new_output)]
flat_new_state = [
_copy_one_through(state, new_state)
for state, new_state in zip(flat_state, flat_new_state)]
return flat_new_output + flat_new_state

def _maybe_copy_some_through():
"""Run RNN step. Pass through either no or some past state."""
new_output, new_state = call_cell()

nest.assert_same_structure(state, new_state)

flat_new_state = nest.flatten(new_state)
flat_new_output = nest.flatten(new_output)
return control_flow_ops.cond(
# if t < min_seq_len: calculate and return everything
time < min_sequence_length, lambda: flat_new_output + flat_new_state,
# else copy some of it through
lambda: _copy_some_through(flat_new_output, flat_new_state))

# TODO(ebrevdo): skipping these conditionals may cause a slowdown,
# but benefits from removing cond() and its gradient. We should
# profile with and without this switch here.
if skip_conditionals:
# Instead of using conditionals, perform the selective copy at all time
# steps. This is faster when max_seq_len is equal to the number of unrolls
# (which is typical for dynamic_rnn).
new_output, new_state = call_cell()
nest.assert_same_structure(state, new_state)
new_state = nest.flatten(new_state)#[c,h],shape=(4, 10)
new_output = nest.flatten(new_output)#shape=(4, 10)
final_output_and_state = _copy_some_through(new_output, new_state)
else:
empty_update = lambda: flat_zero_output + flat_state
final_output_and_state = control_flow_ops.cond(
# if t >= max_seq_len: copy all state through, output zeros
time >= max_sequence_length, empty_update,
# otherwise calculation is required: copy some or all of it through
_maybe_copy_some_through)

if len(final_output_and_state) != len(flat_zero_output) + len(flat_state):
raise ValueError("Internal error: state and output were not concatenated "
"correctly.")
final_output = final_output_and_state[:len(flat_zero_output)]
final_state = final_output_and_state[len(flat_zero_output):]

for output, flat_output in zip(final_output, flat_zero_output):
output.set_shape(flat_output.get_shape())
for substate, flat_substate in zip(final_state, flat_state):
if not isinstance(substate, tensor_array_ops.TensorArray):
substate.set_shape(flat_substate.get_shape())

final_output = nest.pack_sequence_as(
structure=zero_output, flat_sequence=final_output)
final_state = nest.pack_sequence_as(
structure=state, flat_sequence=final_state)

return final_output, final_state
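
The masking itself boils down to the array_ops.where call in _copy_one_through: rows already past their sequence length keep the zero output (or the old state), while the remaining rows take the freshly computed value. A standalone illustration with made-up tensors, assuming eager execution is already enabled as in the test snippet at the top:

import tensorflow as tf

sequence_length = tf.constant([2, 3, 2, 3])
time = tf.constant(2)                    # currently at time step 2 (0-based)
copy_cond = time >= sequence_length      # [True, False, True, False]

zero_output = tf.zeros([4, 10])          # what finished rows should emit
new_output = tf.ones([4, 10])            # what the cell just computed

# Finished rows (0 and 2) keep the zero output; rows 1 and 3 keep the new output.
final_output = tf.where(copy_cond, zero_output, new_output)
print(final_output[:, 0])                # [0., 1., 0., 1.]

The same tf.where pattern, with the old state in place of zero_output, is what freezes c and h once a row is finished, which is why final_state for a shorter sequence equals the state at its last valid step.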
