Teaching Case: Evaluation of Preschool for California's Children

 

 




 

A related issue is the intended users for an evaluation focused on strategic learning. In this case, HFRP said that they would have three user groups—the Foundation, the Trustees, and the grantees.

 

“The problem was that we couldn’t just pass on our reports that were written for the Foundation staff to the grantees and expect them to find value,” Coffman said. “They weren’t targeted to them. For me, I am still struggling with the question of who are the appropriate intended users with an evaluation like this, and can you serve more than one simultaneously?”

 

Concerns about Cost Emerge (Dollars and Time)

 

While Reich and her colleagues believed that the HFRP evaluators’ approach to strategic learning made sense, she thought it would bring its own set of challenges. One was cost.

 

At $300,000 to $350,000 a year, it was among Packard’s more expensive evaluations. When asked about the key challenges of this approach, Salisbury, Reich, and Jeff Sunshine, another Packard program officer who joined in 2007, all said, among other things: “It’s expensive.”

 

Reich said, “If we look at what was spent on the Preschool budget [for evaluation] compared to other grantmaking programs at the Foundation, it was relatively high—one of the highest. You always wonder if the outcomes justify the expense. It’s always a bit of a nagging concern. We are spending a lot of money that is primarily benefiting us and a couple of grantees.”

 

Meera Mani, another program officer in the Children, Families and Communities program, however, disagreed. “We are making something like a $7.5 million investment a year and we’re spending $300,000 on evaluation. That’s not a huge investment for the depth of this evaluation. I’m someone who really believes in good evaluation and continuous improvement. To some extent this is a formative and a summative evaluation in one. That is a tough balance to reach.”

 

For their part, the evaluators say that real-time requires real resources. “Real-time evaluation is not a process that can be done when the evaluation is tightly budgeted and resourced,” Weiss said. “Evaluators need to have sufficient resources to be flexible and responsive. We’ve found that sufficient resources are necessary to avoid being overly ‘contract bound’ and to avoid the kind of nickel-and-diming that can erode relationships and products.”

 

Coffman added, “An evaluator using this approach has to be flexible. Plans can change. You cannot predict when the foundation will need something. If they need something, they need it fast. It’s almost like you need to have an evaluator on a retainer.”

 

Dollars for the evaluation are not the only costs required for strategic learning to work. Foundation staff must put in time and attention, collaborating closely with evaluators on the design of the evaluation as well as reacting to data, learning from it, and applying it as appropriate.

 

“This is a more labor-intensive approach [for program officers] than traditional evaluation,” Reich said. “You have more day-to-day interaction with the evaluators and you need to engage with the results. It’s a significant investment of our thought and time…. The Board also has to engage at a deeper level than they are used to in order to make an evaluation like this a success.”




Gale Berkowitz, the director of evaluation at Packard at the time, added, “It requires a lot from the program staff. They have to know what they want and articulate it. It requires time and attention from them to talk to the evaluators.”

 

“This was not a simple decision for Packard,” according to Coffman. “They had a high-maintenance grantmaking strategy that was expected to change and evolve over time. They had to be highly engaged with their grantees. At the same time, we were asking them to be highly engaged with us. There are only so many hours in the day, and they had to decide where to put that time.”

 

PHASE 1: Evaluation Begins; Ballot Initiative Filed Sooner Than Expected



 

The evaluation got underway in 2004. HFRP designed the evaluation to address four main questions. These questions appear below, along with the data collection methods used to answer them during the course of the evaluation. Because advocacy and policy change efforts are not easily assessed using traditional program evaluation techniques, the evaluation was methodologically innovative and included new methods developed specifically for this evaluation (bellwether methodology, policymaker ratings, champion tracking). (These new methods are described later in the case.)

 

The evaluators began with some fairly traditional activities. They worked with Packard staff to refine the logic model for the program and created a plan that identified numerous indicators of progress, called “critical indicators.”

 

Unlike most evaluations, however, which typically produce lengthy annual reports or summative reports, the HFRP evaluators planned from the start to produce short “learning reports” about every six months. These reports, which would draw on the variety of data the evaluators were collecting, would provide a synthesis of findings and lessons learned. The reports were designed to provide practical information that the Foundation staff and Trustees could use in shaping the strategy for the preschool subprogram. The reports would also be sent to preschool grantees.

 

Evaluators would follow up these learning reports with learning meetings, in which they met with program staff to discuss the findings and implications of those reports. The evaluators developed the agenda for these meetings and facilitated them.

 

Evaluation Questions and Methods

1) Have preschool awareness and political will increased?
   Methods: Grantee reporting, bellwether interviews, media tracking, policymaker ratings, speech tracking, champion tracking

2) Have state preschool policies on access or quality changed?
   Methods: Grantee reporting, policy tracking

3) Have preschool access or quality improved?
   Methods: External data source tracking

4) What is the likelihood for future policy progress on preschool?
   Methods: Bellwether interviews

