We thought they would help with midcourse corrections. They did not. For me, real-time evaluation is my daily or weekly picking up the phone and talking to grantees. The reports I got from the evaluators were interesting in confirming what I was seeing but not in making corrections.”

 

He added, “I do think real-time evaluation is possible. I don’t think this evaluation necessarily cracked it. We got too laden down with too many pieces. It gave us a glimmer that rather than looking back, how do we look forward. And that was helpful.”

 

Reich said, “The notion of real time is always a challenge. I’m not sure how real time we have succeeded in this evaluation. We have been more successful in putting in meaningful systems for tracking policy and advocacy.”



 

PHASE 2: After Failure of Ballot, Strategy Shifts Focus

 

The defeat of Proposition 82 marked another turning point for the Packard work and the evaluation. From almost the start of the preschool subprogram, much of Preschool California and other key grantees’ focus had been on generating preschool supporters among California voters, with a special emphasis on key constituencies.

 

“When the ballot initiative failed, everyone had to regroup,” Salisbury said. “It was a bad defeat at the ballot box… Mounting another ballot measure was highly unlikely after such a strong defeat. The only arena that had any possibility to move forward was either at the local level or the state legislature.”

 

“The ballot initiative posed a significant challenge for us and for our grantees,” Reich added. “How should we engage with it? How should we respond to it? And then it lost and it lost big. We were really at a major strategic inflection point. How could we continue this work when the issue was so thoroughly trounced at the polls? It led to a year or so of soul searching. Our grantees had invested so much in a ballot focus strategy and the legislative strategy had been basically ignored. They had few relationships with legislators and no relationships with the Governor. If you were going to switch to an incremental legislative approach with the preschool, there was just none. We didn’t have metrics from standard policy tracking and the bellwether to tell us how we were doing legislatively.”

 

As the Packard team regrouped, there was less for the evaluators to do. “They were trying to figure out what they were going to do different,” Coffman said. “We did much less. Sometimes [with this approach] there are periods when nothing happens and sometimes there are periods when a ton is going on. If there is not an opportunity for learning, there is no reason to collect data.”

 

This ebb and flow in the evaluation work raised a larger question for Coffman. What should evaluators do during periods when strategy is changing, uncertain, or not fully formed? “If the strategy is not yet in place to track and learn from, then what should we be doing?” Coffman asked. “What is our role?”

 

The Foundation Looks for an “Early Warning System”

 



The ballot initiative’s overwhelming defeat came as a surprise to many. Berkowitz said the period of reflection that followed also led Packard staff to think about ways to get a stronger read on the political context and reach a consensus on what was happening in the landscape.

 

“It was influenced by the ballot initiative failing,” she said. “Can we have a better early warning system? What would that take? The grantees weren’t necessarily that early warning system. Was there a way to understand where legislators and decision makers stood on issues?”

 

The bellwether interviews provided helpful information from influential leaders in California. But they didn’t give Packard or its grantees a read on the people who now held the key to success in achieving universal preschool—state legislators and local officials.

 

Salisbury and Packard staff were well aware of legislator “report cards” on specific issues, but as Salisbury said, “In my experience, they were one-trick ponies. I didn’t think they had legs.”

 

To help Packard get the information it was seeking, HFRP began working closely with Preschool California staff to develop a policymaker rating tool that would also help them do their jobs more effectively.

 

Coffman remembers an early conference call with Packard and Preschool California staff about developing a policymaker tracking system.

 

“The staff from Preschool California said ‘This could be really burdensome, this could be a nightmare for us. Please involve us in developing something that could be helpful for us.’ They were worried that we would come up with something that wouldn’t be relevant,” Coffman said.

 

According to Atkin, Preschool California staff did not want the policymaker ratings to push them to change their approach in ways that were not appropriate, based on their strategy and experience.

 

“I said, ‘Please don’t have us measure something that not only takes time but, more important, creates incentives for us to do work in ways that don’t make sense because we have to be measured on it,’” she said. “We don’t want to measure people who we are not trying to affect. For example, we don’t want to have to meet with a mayor if the mayor is not going to affect preschool.”

 

As HFRP, Packard Foundation staff, and Preschool California staff worked to develop the policymaker rating method, the goal was two-fold: 1) to gather meaningful data on policymaker support, and 2) to make sure the approach would not add an unnecessary data collection burden for grantees.

In their conversations, Preschool California outlined the process it already used to keep track of legislators’ stances on universal preschool. With that information, and in close consultation with Preschool California, HFRP designed a process to make the tracking that Preschool California already did more systematic and comprehensive.

 

A New Policymaker Rating Tool Allows a “Thoughtful Conversation”

 


