




Monte Carlo Algorithms

Summary
- Introduction
- Majority element problem
- Min cut problem
- Prime testing
- Pattern matching
- Amplification of stochastic advantage

Introduction
A Monte Carlo algorithm occasionally makes a mistake, but it finds a correct solution with high probability, whatever the instance considered.
- No instance on which the probability of error is high.
- No warning when the algorithm makes a mistake.
A Monte Carlo algorithm is p-correct if it returns a correct answer with probability at least p, whatever the instance considered (0 < p < 1). p can depend on the size of the instance, but not on the instance itself.
Amplifying the stochastic advantage: the error probability can often be reduced arbitrarily at the cost of a slight increase in computing time. Unfortunately, there is usually no efficient method available to tell whether an answer returned by a Monte Carlo algorithm for a given input is correct.

Biased Monte Carlo Algorithms
A Monte Carlo algorithm for a decision problem is false-biased if it is always correct when it returns the value "false" and only has some (hopefully small) probability of making a mistake when returning the value "true". A similar definition holds for true-biased Monte Carlo algorithms. Fundamental to the applicability of a Monte Carlo algorithm is the fact that the probability of returning a correct output increases with repeated trials.

MCRepeat(k)
  for i ← 1 to k do
    if (MC returns "false") then return false
  return true
Proposition: Suppose that we have a p-correct false-biased Monte Carlo algorithm MC. Then the algorithm MCRepeat(k) is a (1 - (1 - p)^k)-correct false-biased Monte Carlo algorithm.

Proof
Clearly, repetition of a false-biased algorithm results in a false-biased algorithm. For a given input I, note that the probability that we run the algorithm k times and falsely get "true" each time is (1 - p)^k. Hence, if p_k(I) denotes the probability that MCRepeat(k) outputs the correct answer for a given input I, we see that p_k(I) ≥ 1 - (1 - p)^k.

To illustrate the Proposition, suppose we have a false-biased Monte Carlo algorithm MC that is 0.4-correct. Then even for k as small as 23, MCRepeat(k) is 0.99999-correct. Thus, we can be pretty confident in the correctness of MCRepeat(23) if it returns "true" (we know it is correct if it returns "false"), since otherwise an event has occurred whose probability is not greater than 0.00001.
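For illustration (not part of the original slides), here is a minimal Python sketch of this repetition scheme; mc_repeat works for any false-biased routine, and noisy_mc is a hypothetical 0.4-correct placeholder used only to exercise it.

    import random

    def mc_repeat(mc, instance, k):
        """Run a false-biased Monte Carlo routine `mc` k times.

        `mc(instance)` is assumed to be p-correct and false-biased:
        an answer of False is always correct, True may be wrong.
        Returning False as soon as it appears preserves the bias,
        and the error probability drops to at most (1 - p)**k.
        """
        for _ in range(k):
            if not mc(instance):
                return False
        return True

    # Hypothetical 0.4-correct false-biased routine, for demonstration only:
    # when the true answer is False it still says True 60% of the time.
    def noisy_mc(truth):
        if truth:
            return True               # never errs on "true" instances here
        return random.random() < 0.6  # errs (says True) with probability 0.6

    if __name__ == "__main__":
        # With k = 23 the residual error is (0.6)**23 < 0.00001, as in the slides.
        errors = sum(mc_repeat(noisy_mc, False, 23) for _ in range(100_000))
        print("observed error rate:", errors / 100_000)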
If we have an unbiased p-correct algorithm for a decision problem with p ≤ 1/2, then repeating the algorithm does not ramp up the correctness. For unbiased p-correct Monte Carlo algorithms with p > 1/2, the correctness can be improved by repeating the algorithm sufficiently many times and choosing the answer occurring most often.

Majority element problem
Given an array with n components, the problem is to determine whether this array contains a majority element or not. x is a majority element in an array of length n if more than n/2 components equal x.

A deterministic approach would be to sort the array and then determine the frequency of each element; in a sorted array, all equal values are grouped together. If the array did have a majority element, QuickSort would pivot on that element a majority of the time, resulting in poor performance, that is, a relatively 'slow' sort. This behavior holds true even if the pivot element is picked randomly. A deterministic procedure to determine a majority element would have complexity between O(n*lg(n)) and O(n^2).
One possible Monte Carlo version of the majority problem would be to randomly pick an element and see if it is the majority element. If it is, return true; otherwise return false.

IsItMajority(a, n)
  x ← a[Randomi(1, n)]
  m ← 0
  for i ← 1 to n do
    if (x = a[i]) then
      m ← m + 1
  if ((2*m) > n) then return true
  else return false
If there is no majority element in a, IsItMajority always returns the correct answer, which is false. Suppose a contains a majority element Xm and that Xm appears k times in the array a. Then IsItMajority returns the correct answer true with probability p = (k/n) > 0.5. If the value "true" is returned, a is known to contain a majority element; if "false" is returned, nothing is known.
For the purposes of simple analysis, let Xm occur exactly (n+1)/2 times in an array a of length n. Clearly, Xm is a majority element. Unfortunately, the boolean function IsItMajority only returns the correct answer "true" with probability p and returns the incorrect answer "false" with probability (1-p). Note that if n is large, both p and (1-p) are approximately 1/2.
Remember that if a does not contain a majority element, every trial of the Boolean-valued function IsItMajority returns the correct answer. If ε > 0 denotes an acceptable level of risk of error, then the number of trials needed to achieve that level of certainty is the smallest integer k such that k = ⌈lg(1/ε)⌉. If ε = 0.01, then k = ⌈lg(100)⌉ = 7.

RepeatedIsItMajority(a, n, ε)
  k ← ⌈lg(1/ε)⌉
  for i ← 1 to k do
    if IsItMajority(a, n) then return true
  return false
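A Python sketch of both routines (an illustration under the assumptions above, not the slides' own code):

    import math
    import random

    def is_it_majority(a):
        """One trial: pick a random element and check whether it is a majority."""
        x = random.choice(a)                 # plays the role of a[Randomi(1, n)]
        count = sum(1 for y in a if y == x)  # O(n) scan
        return 2 * count > len(a)

    def repeated_is_it_majority(a, eps):
        """True-biased test: k = ceil(lg(1/eps)) independent trials."""
        k = math.ceil(math.log2(1 / eps))
        return any(is_it_majority(a) for _ in range(k))

    if __name__ == "__main__":
        arr = [3, 1, 3, 3, 2, 3, 3]                          # 3 occurs 5 times out of 7
        print(repeated_is_it_majority(arr, 0.01))            # almost surely True
        print(repeated_is_it_majority([1, 2, 3, 4], 0.01))   # always False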
3. Min cut problem
Graphs offer a more complex example of randomized algorithms. Imagine we have a large graph called G; to make things easier, we consider special types of graphs:
- G is undirected, which means that there can't really be any loops in it.
- G is a multi-graph, so there can be more than one edge between nodes.
- All edges in G are of unit weight.

Given this graph G, we need to fix some notation for operations that we can perform on it. A cut is the act of partitioning the nodes into two mutually exclusive sets B and C.
- Any edges between nodes in B and nodes in C are said to be in the cut.
- The weight of the cut is the sum of weights on edges with one node in each of the two sets.
If e is an edge, we use G/e to denote the graph made by contracting the edge e in the graph G:
- Contracting an edge e = (u,v) means adding a new node x, replacing any edge (u,y) or (v,y) with (x,y), and deleting u and v from G.

For example, if we cut the example graph on the slide into ({a,b,c},{d}), the weight is 2.
The maximum cut and minimum cut problems can be posed:
- Find a cut with the maximum or minimum weight.
Question: how do we find the minimum cut of a graph?

Believe it or not, these problems are important in real life:
- Connections and nodes in the Internet form a graph.
- We want the Internet to be a well connected graph: if one connection goes down, routing is done via another.
- The minimum cut of the graph can tell us how well connected the graph is.
For a finite graph G, there are clearly a finite number of cuts:
- Obviously if there are finitely many nodes, there have to be finitely many edges between them.
- However, enumerating them all for large graphs is intractable.

RandomMincut(G)
  initially S(v) = {v} for each v
  while |G| > 2 do
    choose an edge e = (u, v) of G at random
    contract e, replacing u and v with a new node x
    delete the edge e from G
    define S(x) = S(u) ∪ S(v)
  return the cut (S(v1), S(v2))

(Two example runs of the contraction algorithm are shown on the slides.)
Given these two examples, it doesn't seem the algorithm gives the minimum cut at all!
- It won't always give the right answer, but it will with high probability.
- This can therefore be classed as a Monte Carlo algorithm.

A cut in G/e induces, or implies, a cut in G:
- In other words, for any cut in G/e, there is a cut in G of the same weight.
- The nodes merged together always stay in the same part of the cut when undone.
Contracting an edge cannot therefore decrease the minimum cut size:
- So by repeated contraction, we can estimate the minimum cut of G.

In order to cope with the fact that a single run might give the wrong answer, we run the algorithm many times.
- Simply, this improves the probability of our estimate being right.

For example, it will be shown that the random algorithm gives the minimum cut of a graph containing n nodes with probability at least 2/(n(n-1)). This isn't very good, but we can just run the algorithm many times.
- For example, if we run it n^2 log n times and pick the minimum result, this will be the right result with high probability.
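The following Python sketch (an illustration, not the slides' own code) implements random contraction on a multigraph given as an edge list, using a union-find structure in place of the sets S(v), and repeats it roughly n^2 log n times, keeping the smallest cut weight found. It returns only the weight of the cut, a simplification of the pseudocode above.

    import math
    import random

    def random_mincut_once(n, edges):
        """One run of the contraction algorithm on a multigraph with vertices 0..n-1.

        Returns the number of edges crossing the final two super-nodes.
        """
        parent = list(range(n))             # union-find: which super-node each vertex is in

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        remaining = n
        edges = edges[:]                    # work on a copy
        while remaining > 2:
            if not edges:                   # graph already disconnected: cut of weight 0
                return 0
            u, v = random.choice(edges)     # choose an edge e = (u, v) of G at random
            parent[find(u)] = find(v)       # contract e: u and v become one super-node
            remaining -= 1
            # delete self-loops created by the contraction
            edges = [e for e in edges if find(e[0]) != find(e[1])]
        return len(edges)                   # edges between the two remaining super-nodes

    def random_mincut(n, edges):
        """Repeat about n^2 * log n times and keep the minimum, as on the slides."""
        trials = max(1, int(n * n * math.log(max(n, 2))))
        return min(random_mincut_once(n, edges) for _ in range(trials))

    if __name__ == "__main__":
        # a square with one diagonal: the minimum cut has weight 2
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        print(random_mincut(4, edges))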
Analyzing the Algorithm
Theorem: The Contraction Algorithm returns a global min-cut of G with probability at least 2/(n(n-1)).

The Number of Global Minimum Cuts
Corollary: An undirected graph G = (V, E) on n nodes has at most n(n-1)/2 global min-cuts.
Proof. Let C1, …, Cr denote all its global min-cuts. Let εi denote the event that Ci is returned by the Contraction Algorithm, and let ε denote the event that the algorithm returns any global min-cut. The proof that Pr{ε} ≥ 2/(n(n-1)) actually shows that for each i, we have Pr{εi} ≥ 2/(n(n-1)). Now each pair of events εi and εj is disjoint (since only one cut is returned by any given run of the algorithm), so by the union bound for disjoint events we have Pr{ε} = Σi Pr{εi} ≥ 2r/(n(n-1)). But clearly Pr{ε} ≤ 1, and so we must have r ≤ n(n-1)/2.
The density of prime numbers
For many applications (such as cryptography), we need to find large "random" primes. Fortunately, large primes are not too rare, so that it is not too time-consuming to test random integers of the appropriate size until a prime is found.

The prime distribution function π(n) specifies the number of primes that are less than or equal to n.
Theorem (Prime number theorem): lim_{n→∞} π(n) / (n / ln n) = 1.
For example, π(10) = 4, since there are 4 prime numbers less than or equal to 10, namely 2, 3, 5, and 7. The prime number theorem gives a useful approximation to π(n), namely π(n) ≈ n / ln n.
We can use the prime number theorem to estimate the probability that a randomly chosen integer n will turn out to be prime as 1/ln n. Thus, we would need to examine approximately ln n integers chosen randomly near n in order to find a prime that is of the same length as n. For example, to find a 512-bit prime might require testing approximately ln 2^512 ≈ 355 randomly chosen 512-bit numbers for primality.
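As a quick sanity check of this estimate (added here, not in the original), Python reproduces the ≈ 355 figure:

    import math

    # A random 512-bit number is prime with probability roughly 1 / ln(2^512),
    # so the expected number of candidates to test is about ln(2^512) = 512 * ln 2.
    expected_trials = 512 * math.log(2)
    print(round(expected_trials))   # about 355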
One simple approach to the problem of testing for primality is trial division.

TestPrim(n)
  for i ← 2 to ⌊√n⌋ do
    if n mod i = 0 then return false
  return true
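A direct Python transcription of TestPrim (an illustrative sketch):

    import math

    def test_prim(n):
        """Trial division: Theta(sqrt(n)) divisions in the worst case."""
        if n < 2:
            return False
        for i in range(2, math.isqrt(n) + 1):   # i runs from 2 to floor(sqrt(n))
            if n % i == 0:
                return False
        return True

    if __name__ == "__main__":
        print([p for p in range(2, 30) if test_prim(p)])  # 2, 3, 5, 7, 11, ...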
Assuming that each trial division takes constant time, the worst-case running time is Θ(n^(1/2)), which is exponential in the length of n. (Recall that if n is encoded in binary using β bits, then β = ⌈lg(n+1)⌉, and so n^(1/2) = Θ(2^(β/2)).) Thus, trial division works well only if n is very small or happens to have a small prime factor.
Pseudoprimality testing
Theorem (Fermat's theorem): If n is prime, then a^(n-1) ≡ 1 (mod n) for every a with 1 ≤ a ≤ n-1.
What does it mean if a^(n-1) mod n ≠ 1? What does it mean if a^(n-1) mod n = 1?
For example, n = 7: 2^6 mod 7 = 1; n = 6: 2^5 mod 6 = 2.
Thus, if we can find any a such that n does not satisfy this equation, then n is certainly composite. Surprisingly, the converse almost holds, so that this criterion forms an almost perfect test for primality. We test to see whether n satisfies the equation for a = 2. If not, we declare n to be composite. Otherwise, we output a guess that n is prime.

We say that n is a base-a pseudoprime if n is composite and a^(n-1) ≡ 1 (mod n).

How to calculate a^b mod n?
ModularExponentiation(a, b, n)
  c ← 0
  d ← 1
  let b_k, b_{k-1}, ..., b_0 be the binary representation of b
  for i ← k downto 0 do
    c ← 2c
    d ← (d·d) mod n
    if b_i = 1 then
      c ← c + 1
      d ← (d·a) mod n
  return d

For example: b = b_k b_{k-1} ... b_0 = b_k·2^k + b_{k-1}·2^(k-1) + ... + b_1·2 + b_0. Let a = 7, b = 560, and n = 561, and calculate a^b mod n. In binary, b = 560 = 1000110000.

  i    9    8    7    6    5    4    3    2    1    0
  b_i  1    0    0    0    1    1    0    0    0    0
  c    1    2    4    8    17   35   70   140  280  560
  d    7    49   157  526  160  241  298  166  67   1
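A Python sketch of the same left-to-right square-and-multiply procedure; it reproduces the last column of the table for a = 7, b = 560, n = 561 (Python's built-in pow(a, b, n) does the same job).

    def modular_exponentiation(a, b, n):
        """Compute a^b mod n by scanning the bits of b from most to least significant."""
        c, d = 0, 1
        for bit in bin(b)[2:]:          # binary representation b_k ... b_0
            c, d = 2 * c, (d * d) % n   # squaring step
            if bit == '1':
                c, d = c + 1, (d * a) % n
        return d                        # at the end c == b and d == a^b mod n

    if __name__ == "__main__":
        print(modular_exponentiation(7, 560, 561))   # 1, matching the table
        print(pow(7, 560, 561))                      # built-in cross-check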
PseudoPrim(n)
  if ModularExponentiation(2, n-1, n) ≠ 1 (mod n)
    then return COMPOSITE  // Definitely.
    else return PRIME      // We hope!
This procedure can make errors, but only of one type. That is, if it says that n is composite, then it is always correct. If it says that n is prime, however, then it makes an error only if n is a base-2 pseudoprime.

How often does this procedure err? Surprisingly rarely. There are only 22 values of n less than 10,000 for which it errs; the first four such values are 341, 561, 645, and 1105. It can be shown that the probability that this program makes an error on a randomly chosen β-bit number goes to zero as β → ∞.
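A sketch of PseudoPrim in Python (illustrative only; it uses the built-in pow in place of ModularExponentiation):

    def pseudo_prim(n):
        """Base-2 Fermat test: COMPOSITE is always right, PRIME is only a guess."""
        if pow(2, n - 1, n) != 1:    # pow(a, b, n) computes a^b mod n
            return "COMPOSITE"       # definitely
        return "PRIME"               # we hope!

    if __name__ == "__main__":
        # 341 = 11 * 31 is composite, yet it fools the test (a base-2 pseudoprime).
        print(pseudo_prim(17), pseudo_prim(15), pseudo_prim(341))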
So if you are merely trying to find a large prime for some application, for all practical purposes you almost never go wrong by choosing large numbers at random until one of them causes PseudoPrim to output PRIME. But when the numbers being tested for primality are not randomly chosen, we need a better approach for testing primality.
Unfortunately, we cannot entirely eliminate all errors by simply checking the equation for a second base number, say a = 3, because there are composite integers n that satisfy the equation for every base a relatively prime to n. These integers are known as Carmichael numbers. The first three Carmichael numbers are 561, 1105, and 1729. Carmichael numbers are extremely rare; there are, for example, only 255 of them less than 100,000,000.
We next show how to improve our primality test so that it won't be fooled by Carmichael numbers.

What is a Monte Carlo algorithm for testing if n is prime using Fermat's theorem? Randomly pick a base a and test.

The randomized primality test
The Miller-Rabin primality test overcomes the problems of the simple test PseudoPrim with two modifications:
- It tries several randomly chosen base values a instead of just one base value.
- While computing each modular exponentiation, it notices if a nontrivial square root of 1, modulo n, is discovered during the final set of squarings. If so, it stops and outputs COMPOSITE.

Theorem: If n is prime and 0 < x < n, the only solutions to x^2 ≡ 1 (mod n) are x = 1 and x = n - 1.
Proof: x^2 ≡ 1 (mod n) implies that x^2 - 1 ≡ 0 (mod n). This implies (x - 1)(x + 1) ≡ 0 (mod n). Since n is prime and 0 < x < n, n must divide either (x - 1) or (x + 1), and the theorem follows.
Witness(a, n)
  let n - 1 = 2^t u, where t ≥ 1 and u is odd
  x_0 ← ModularExponentiation(a, u, n)
  for i ← 1 to t do
    x_i ← (x_{i-1})^2 mod n
    if x_i = 1 and x_{i-1} ≠ 1 and x_{i-1} ≠ n - 1
      then return true
  if x_t ≠ 1 then return true
  return false
We now examine the Miller-Rabin primality test based on the use of WITNESS. Again, we assume that n is an odd integer greater than 2.

MillerRabin(n, s)
1  for j ← 1 to s do
2    a ← Randomi(1, n - 1)
3    if Witness(a, n)
4      then return COMPOSITE  // Definitely.
5  return PRIME  // Almost surely.
If one of the a's picked is a witness to the compositeness of n, then MillerRabin outputs COMPOSITE on line 4. Such an output is always correct, by the correctness of WITNESS. If no witness can be found in s trials, MillerRabin assumes that this is because there are no witnesses to be found, and therefore n is assumed to be prime. We shall see that this output is likely to be correct if s is large enough, but that there is a small chance that the procedure may be unlucky in its choice of a's and that witnesses do exist even though none has been found.
To illustrate the operation of MillerRabin, let n be the Carmichael number 561, so that n - 1 = 560 = 2^4 · 35. Supposing that a = 7 is chosen as a base, WITNESS computes x_0 ≡ a^35 ≡ 241 (mod 561) and thus computes the sequence <241, 298, 166, 67, 1>. Thus, a nontrivial square root of 1 is discovered in the last squaring, since a^280 ≡ 67 (mod n) and a^560 ≡ 1 (mod n). Therefore, a = 7 is a witness to the compositeness of n, Witness(7, n) returns TRUE, and MillerRabin returns COMPOSITE.
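The following Python sketch mirrors Witness and MillerRabin (an illustration under the same assumption that n is odd and greater than 2); it confirms that a = 7 is a witness for the Carmichael number 561.

    import random

    def witness(a, n):
        """Return True if a proves that the odd number n is composite."""
        # write n - 1 = 2^t * u with u odd
        t, u = 0, n - 1
        while u % 2 == 0:
            t, u = t + 1, u // 2
        x = pow(a, u, n)
        for _ in range(t):
            prev, x = x, (x * x) % n
            if x == 1 and prev != 1 and prev != n - 1:
                return True          # nontrivial square root of 1 found
        return x != 1                # Fermat condition fails

    def miller_rabin(n, s):
        """s random bases; COMPOSITE is certain, PRIME errs with probability <= 2^(-s)."""
        for _ in range(s):
            a = random.randint(1, n - 1)
            if witness(a, n):
                return "COMPOSITE"   # definitely
        return "PRIME"               # almost surely

    if __name__ == "__main__":
        print(witness(7, 561))                               # True: 7 exposes 561
        print(miller_rabin(561, 10), miller_rabin(569, 10))  # 569 is prime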
It has been shown that if n is an odd composite number, then the number of witnesses to the compositeness of n is at least (n - 1)/2.

Theorem: For any odd integer n > 2 and positive integer s, the probability that MillerRabin(n, s) errs is at most 2^(-s).
5. Pattern Matching

Given strings X = x_1 x_2 ... x_n (the text) and Y = y_1 y_2 ... y_m (the pattern), where m ≤ n, the pattern matching problem consists of finding a substring of X equal to Y. Without loss of generality, we will assume that the text alphabet is binary, so X and Y are bit strings.
Applications: text editors, search engines, biological research.
The brute-force pattern matching algorithm compares the pattern Y with the text X for each possible shift of Y relative to X, until either
- a match is found, or
- all placements of the pattern have been tried.
Brute-force pattern matching runs in time O(nm). Example of worst case: X = aaa…ah, Y = aaah; this may occur in images and DNA sequences, but is unlikely in English text.
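For reference, a minimal Python version of the brute-force matcher (illustrative sketch):

    def brute_force_match(text, pattern):
        """Return the first index where pattern occurs in text, or -1; O(nm) time."""
        n, m = len(text), len(pattern)
        for j in range(n - m + 1):              # each possible shift of Y over X
            if text[j:j + m] == pattern:        # up to m character comparisons
                return j
        return -1

    if __name__ == "__main__":
        print(brute_force_match("aaaaaaah", "aaah"))   # 4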
There are more complicated deterministic algorithms whose running time is O(n+m). Here we will present a simple and efficient Monte Carlo algorithm that also achieves a running time of O(n+m); then we will convert it later into a Las Vegas algorithm.

Let I(s) be the integer represented by the bit string s. One method of fingerprinting is to choose a prime number q and then use the fingerprinting function I_q(s) = I(s) mod q.
If q is not too large, then the fingerprint I_q(x) can be sent as a short string. The number of bits to be transmitted is thus O(log q). If I_q(x) ≠ I_q(y), then obviously x ≠ y.

Let X_j be the substring of X of length m that starts at position j, that is, X_j = x_j x_{j+1} ... x_{j+m-1}. The Monte Carlo algorithm is similar to the brute-force algorithm, but instead of comparing the pattern Y with X_j, it compares the fingerprint I_q(Y) with I_q(X_j). The key step of this Monte Carlo algorithm is how to compute the fingerprint of X_{j+1} from the fingerprint of X_j.
The computational method is

  I(X_{j+1}) = 2 (I(X_j) - 2^(m-1) x_j) + x_{j+m}.

If we let W_q = 2^(m-1) mod q, then

  I_q(X_{j+1}) = (2 (I_q(X_j) - W_q x_j) + x_{j+m}) mod q.

The computation of each of W_q, I_q(Y) and I_q(X_1) costs O(m) time. When implementing the computation of I_q(X_{j+1}) from I_q(X_j), each update costs only constant time, so computing all the remaining fingerprints costs O(n) time. So the total running time is O(n+m).
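A Python sketch of this fingerprinting scheme for bit strings (illustrative; here q is a fixed prime, whereas the analysis below assumes q is drawn at random among the primes below 2mn^2). Setting verify=True adds the explicit string comparison that turns the Monte Carlo algorithm into the Las Vegas version discussed later.

    def fingerprint_match(x, y, q=2_147_483_647, verify=False):
        """Search for the bit list y (length m) inside the bit list x (length n).

        Fingerprints I_q(X_j) are updated with the recurrence
        I_q(X_{j+1}) = (2*(I_q(X_j) - W_q*x_j) + x_{j+m}) mod q, W_q = 2^(m-1) mod q.
        Returns the first 0-based j with matching fingerprints, or -1.
        """
        n, m = len(x), len(y)
        if m > n:
            return -1
        w = pow(2, m - 1, q)                      # W_q
        fy = 0
        for bit in y:                             # I_q(Y) in O(m)
            fy = (2 * fy + bit) % q
        fx = 0
        for bit in x[:m]:                         # I_q(X_1) in O(m)
            fx = (2 * fx + bit) % q
        for j in range(n - m + 1):                # O(n) sliding updates
            if fx == fy and (not verify or x[j:j + m] == y):
                return j                          # fingerprint match (Monte Carlo)
            if j + m < n:
                fx = (2 * (fx - w * x[j]) + x[j + m]) % q
        return -1

    if __name__ == "__main__":
        text = [0, 1, 1, 0, 1, 0, 1, 1]
        pattern = [0, 1, 0, 1, 1]
        print(fingerprint_match(text, pattern))               # 3
        print(fingerprint_match(text, pattern, verify=True))  # Las Vegas variant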
Now we analyze the frequency with which this algorithm will fail. A false match will occur only if for some j we have I_q(Y) = I_q(X_j) but Y ≠ X_j. This is only possible if the chosen prime q divides the product of |I(Y) - I(X_j)| over all positions j with Y ≠ X_j. This product cannot exceed (2^m)^n, and hence the number of primes that divide it cannot exceed π(mn). If we choose q at random among the primes smaller than M = 2mn^2, then the probability of a false match cannot exceed π(mn)/π(M), which is O(1/n).
The probability of failure depends only on the length of the text X. To convert the algorithm into a Las Vegas algorithm, whenever the two fingerprints I_q(Y) and I_q(X_j) match, the two strings are tested for equality. The expected time complexity of this Las Vegas algorithm remains O(n+m).
6. Amplification of stochastic advantage
Biased: it is known with certainty that one of the possible answers is always correct. The error can be reduced by repeating the algorithm.
Unbiased: for example, a coin flip.

Is it still possible to decrease the error probability arbitrarily by repeating the algorithm? The answer is that it depends on the original error probability. The first obvious remark is that amplification of the stochastic advantage is impossible in general unless p > 1/2, because there is always the worthless 1/2-correct algorithm

Stupid(I)
  if coinflip = heads then return true
  else return false

whose stochastic "advantage" cannot be amplified.
Provided p ≥ 1/2, define the advantage of a p-correct Monte Carlo algorithm to be p - 1/2, namely advantage = p - 1/2. If the advantage is at least some ε > 0, the Monte Carlo algorithm for the decision problem can be turned into one whose error probability is as small as we wish.

Let MC be a 3/4-correct unbiased Monte Carlo algorithm to solve some decision problem. Let

MC3(I)
  t ← MC(I); u ← MC(I); v ← MC(I)
  if t = u or t = v then return t
  else return u

What is the error probability of MC3? MC3 is 27/32-correct (> 84%).
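A small simulation (an illustrative sketch; noisy_mc is a hypothetical 3/4-correct unbiased stand-in) showing how three votes push the success rate from 3/4 to 27/32 ≈ 0.84:

    import random

    def mc3(mc, instance):
        """Take three independent answers and return the one occurring at least twice."""
        t, u, v = mc(instance), mc(instance), mc(instance)
        return t if (t == u or t == v) else u

    # Hypothetical 3/4-correct unbiased routine: answers correctly with probability 3/4.
    def noisy_mc(truth):
        return truth if random.random() < 0.75 else (not truth)

    if __name__ == "__main__":
        trials = 100_000
        ok = sum(mc3(noisy_mc, True) for _ in range(trials))
        print(ok / trials, 27 / 32)   # the two numbers should be close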
Let R and W denote the right and wrong answers returned by the three calls to MC. MC3 errs only when at least two of the three calls are wrong (outcomes WWR, WRW, RWW and WWW), which happens with probability 3·(1/4)^2·(3/4) + (1/4)^3 = 5/32; hence MC3 is 27/32-correct. More generally, let MC be a Monte Carlo algorithm to solve some decision problem with advantage ε > 0, and consider…