Challenges While Converting from Blaise 4 to 5
Joost Huurman, Statistics Netherlands
Since 2015, Statistics Netherlands has been modernizing its data collection (the Phoenix Programme). The transformation from Blaise 4 to Blaise 5 is one of the main challenges. I would like to organize a workshop (not a presentation) where we can address the issues arising from the large-scale conversion of questionnaires and the upgrading of the technical platform while still "keeping the shop open".
Migrating from Blaise 4 to Blaise 5 - The AHS Experience
Roberto Picha and Richard Squires, U.S. Census Bureau
The Authoring Team at the U.S. Census Bureau is currently reviewing the level of effort required to convert an existing production survey from Blaise 4 to Blaise 5. For this task, the American Housing Survey was chosen, converting an instrument from Blaise 4.8.4 Build 1861 to Blaise 5.3.0 Build 1487.
As part of this review, we are evaluating the conversion process, modifications required to Blaise 5 code, adjustments required to Blaise 5 LAYOUT in order to present screens as currently designed, and issues encountered while deploying to a server and launching the instrument as a web based survey.
This paper will discuss the challenges encountered during the full conversion. It will review some of the shortcomings we discovered in our Blaise 4 code that made the migration process very challenging and how we had to deal with them in Blaise 5. Additionally, the paper will address issues with some of the templates we used during the conversion as well as other challenges encountered in the full conversion.
This paper will also present what we consider to be our "working" Master Template, the adjustments made to a few stock Blaise 5 templates, and our survey-specific templates that can be used to enhance the data collection process for our surveys.
Rick Dulaney and G. J. Boris Allan, Westat
Blaise 5 has been engineered with web data collection in mind, reflecting the general trend in survey research. Some large surveys are currently CAPI and have a large, entrenched CAPI field force, and these staff members are trained to use the keyboard for CAPI data collection. The default Blaise 5 screen presentation is a significant change for interviewers who are used to keyboarding. Particularly in categorical questions, the Blaise 5 default screen presentation requires interviewers to use a pointing device or to touch the screen with finger or stylus in order to make a selection. Many organizations will face a choice: accept the web presentation style and retrain the interviewers, or remain in Blaise 4 but lose many of the benefits of modernization.
In this paper we present some of the key challenges of shifting to web-based data collection in some detail, and demonstrate some alternate presentation formats in Blaise 5 that may prove useful when using Blaise 5 for CAPI.
R. Suresh, Keith Bajura, Matt Boyce, Lilia Filippenko, Preethi Jayaram, Joe Nofziger, Gil Rodriguez, Jean Robinson, and Vorapranee Wickelgren, RTI International
RTI uses Statistics Netherlands' Blaise (Version 4.8) software for several critical survey projects. Given that Blaise 5 is a significant redesign, offering new capabilities for web and mobile, we did a very careful evaluation of its features, and adapted our case management systems for CAPI and mobile to work with Blaise 5. This presentation will describe our journey in incorporating Blaise 5 into our survey software toolkit and our experience in using the newer features offered by Blaise 5.
Karen Brenner, John Barbee, and Richard Frey, Westat
The ultimate assurance of usability and inclusivity across any system is accessibility. This paper will discuss creating accessible surveys with Blaise, and will explain some of the necessary steps a developer needs to take to ensure Section 508/WCAG 2 compliant surveys.
Mecene Desormice, Michael Mangiapane, Erin Slyne, and Richard Squires, U.S. Census Bureau
When the U.S. Census Bureau migrated its survey data collection instruments to Blaise 4, screen standards were developed so that all interviewers would have a consistent experience as they conduct a survey. In recent years, it became a priority for the Census Bureau to make surveys more accessible to interviewers who use software and hardware that assists in reading the text presented on their screens. To accomplish this, a team of Blaise programmers took on the task of evaluating how visually impaired interviewers use our CATI survey instruments. These interviewers typically use screen-reader software known as JAWS with a refreshable Braille display.
This paper will discuss the initial feedback we received from our CATI staff on how our survey instruments currently work with JAWS, including examples of screens that JAWS does not read accurately and screens that are difficult for the interviewers to navigate. It will describe the modifications we made to our systems, our instruments, and JAWS settings to help accommodate the needs of our interviewers. In addition, this paper will discuss some of the challenges that we face including how JAWS reads look-up tables and edit checks, the need for clarifying interviewer instructions, and enhancing our instruments so both visually impaired and non-visually impaired interviewers can use them concurrently.
Todd Flannery and Ed Dolbow, Westat
As instruments grow in complexity, difficulties arise in testing routes due to the sheer number of combinations of data items and the relationships among those items. Effectively, these phenomena limit the efficiency of prescribed scenario testing, in that only those planned scenarios can be developed and tested, and this results in potential gaps in coverage regarding some novel combinations of data items. Although automated regression testing can generate data for targeted paths, some problematic paths in the routing could be unaccounted for in the planning of the tests. Data emulation of many records can assist in producing a test base that is more comprehensive and thus more representative of possible combinations that can occur in a field study, however unlikely.
The paper will present an approach implemented on a large national longitudinal study to test widely varied routes through a complex Blaise instrument by using the multi-threaded emulator tool or alternative processes to produce large datasets of randomly generated values. By implementing a multi-step process, these records are tailored according to the project needs to be able to approach a comprehensive set of outcome scenarios. Analysis of this data helps in quality control of Blaise coding and in determining data outliers before they arise in a real-world scenario.
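The random-generation step described above can be illustrated with a minimal sketch. This is not the study's actual emulator; the field names, types, and ranges here are hypothetical stand-ins for metadata that would come from the real instrument:

```python
import random

# Hypothetical field definitions: (name, type, domain).
# A real implementation would read these from the instrument's metadata.
FIELDS = [
    ("Age", "int", (18, 99)),
    ("Employed", "enum", ["Yes", "No", "Refusal"]),
    ("HoursWorked", "int", (0, 80)),
]

def generate_record(rng):
    """Draw one random value for every field in the definition."""
    record = {}
    for name, ftype, domain in FIELDS:
        if ftype == "int":
            lo, hi = domain
            record[name] = rng.randint(lo, hi)
        elif ftype == "enum":
            record[name] = rng.choice(domain)
    return record

def generate_dataset(n, seed=0):
    """Produce n random records; a fixed seed keeps test runs reproducible."""
    rng = random.Random(seed)
    return [generate_record(rng) for _ in range(n)]
```

In a multi-step process such as the one described, a second pass would then tailor or filter these records so that the resulting test base covers the outcome scenarios the project needs.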
Rod Furey, Statistics Netherlands
One of the things that people always ask is, "How can I best test my questionnaire?" There are various things that can be written to aid the questionnaire designer in their testing efforts. One of the items that comes up is how to test standard blocks that are included in various instruments. Another is how to test the blocks inside an instrument in a stand-alone way. One way of achieving this testing is to create a simple, enclosing datamodel which contains a block instance and run that through a testing regime. Given the correct circumstances, it should be possible to generate such a datamodel automatically. This approach can be expanded with a few user-friendly items for the end-user to aid testing. If a program is available to generate test data, the combination of these two programs, along with a simple way of saving the route in a form that can be used in a comparison program, would allow the end-user to bulk generate test sets and analyze any changes that may be made to the route by small changes in data values.
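A minimal sketch of such an enclosing datamodel might look like the following. The library and block names are hypothetical; the point is simply that the wrapper contains a single instance of the block under test so it can be run and tested stand-alone:

```
DATAMODEL BlockTestWrapper "Stand-alone test harness for one block"

USES
  HouseholdLib 'HouseholdLib'   {library containing the block under test}

FIELDS
  TheBlock : THousehold         {single instance of the block}

ENDMODEL
```

A generator could emit a wrapper like this automatically for each block found in a library, which is what makes bulk testing of standard blocks feasible.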
Jan Haslund and Trond Båshus, Statistics Norway
This paper describes our experiences with Blaise 5 and multi-mode surveys and specifically the CATI-functionality in Blaise 5.
We have done CAWI interviewing in Blaise 5 since 2014, but had not tried multi-mode or CATI interviewing in Blaise 5 until 2018. This year we used the CATI functionality in Blaise 5 for the first time and also explored true multi-mode questionnaires combining CATI and CAWI, having previously used a combination of Blaise 4.8 for CATI and Blaise 5 for CAWI. The survey we started with was "Government and citizenship" (GOVCIT), a multi-mode and multi-language survey. It started as a CAWI-only survey, with CATI follow-up after two weeks. We also wanted to test the possibility of conducting CATI and CAWI at the same time.
The next multi-mode survey in Blaise 5 is the Labour Force Survey (LFS). We used the framework built for GOVCIT and refined it further. This is a pilot with CATI in the first wave and either CAWI or CATI follow-up. Respondents will have the option to respond via CAWI in the following waves if they fulfill certain conditions. This will work together with our case management system (SIV). We will have one questionnaire for both CAWI and CATI, using the same datamodel and the same database. Text roles are used to define mode-specific texts. The layout is modified individually for CATI (desktop and laptop), CAWI (mobile browser), and other browsers (desktop, laptop, tablet).
David Kinnear, Office for National Statistics, UK
Earlier this year, the Office had to implement a solution to deliver a pilot mixed-mode Opinions survey using web and telephone. Currently the Opinions CATI survey is conducted using Blaise 4.8, and the corporate systems being developed to support mixed-mode collection were not yet in place. For a quick solution, it was decided that Blaise 4.8 would be used for telephone interviewing and Blaise 5 for web collection.
Implementing a web questionnaire posed significant challenges. The infrastructure had to be in place to host the questionnaire and merge real-time web data with the live CATI database. A new Manipula system was developed to regularly update the Blaise CATI database with the latest web data; this was crucial so that telephone interviewers had access to the latest web data in case a respondent requested to switch modes. Additionally, the web questionnaire had a different structure from the existing interviewer-led questionnaire, to reduce the length of time a respondent would potentially have to spend completing the survey.
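Schematically, a Manipula setup for this kind of regular merge might look as follows. This is a simplified sketch, not the ONS system: the datamodel and file names are hypothetical, and any field mapping between the differently structured web and CATI questionnaires is omitted:

```
SETTINGS
  DESCRIPTION = 'Copy completed web cases into the CATI database'

USES
  WebMeta  'OpinionsWeb'    {datamodel of the Blaise 5 web questionnaire}
  CatiMeta 'OpinionsCati'   {datamodel of the Blaise 4.8 CATI instrument}

INPUTFILE  WebData  : WebMeta  ('WebData', BLAISE)
UPDATEFILE CatiData : CatiMeta ('CatiCases', BLAISE)

MANIPULATE
  {like-named fields are copied from the web record; the updated
   case is then written back so interviewers see the latest web data}
  CatiData.WRITE
```

Run on a schedule, a job of this shape keeps the CATI database close to real time with respect to web completions.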
Government Digital Standards, designed to promote a common approach across all government websites, influenced the layout of the Blaise 5 web questionnaire and pushed the boundaries of our knowledge in what was achievable using the Blaise resource database. Issues were found when trying to implement more complex templates. Some were down to user inexperience, and others down to software issues. These were quickly resolved with support provided by the Blaise team.
The results of the mixed-mode pilot revealed an encouraging response from those completing on the web, and contributed to a higher overall response rate for the Opinions when compared with previous collection modes.
Lilia Filippenko, Preethi Jayaram, Joe Nofziger, Brandon Peele, and R. Suresh, RTI International
For many years, RTI's Integrated Field Management System (IFMS) has been the standard system for CAPI projects on laptops, supporting instruments developed in Blaise and the other CAI software packages RTI uses. The laptop Case Management System (CMS) is configured to launch the specific CAI software required by the project. Since Blaise 5 installs and launches instruments on laptops differently than Blaise 4, we needed to adapt our CMS to support it. We tweaked our CMS and upgraded the associated Manipula scripts, and now RTI's IFMS can support Blaise 4, Blaise 5, and other CAI software as needed. The paper will describe the process we followed and the challenges we encountered during this development.
Marsha Skoman, University of Michigan
This session will demonstrate one approach to conducting offline Blaise 5 surveys on Windows laptops in the field. The Survey Research Center uses SurveyTrak, one of its sample management systems, to create cases on the fly in the field and to conduct Blaise 5 surveys for those cases. This approach is currently being used for the Health and Retirement Study (HRS), a panel study that surveys more than 20,000 respondents.
To be discussed: using Manipula and standalone DEP, configuring the sample management system, configuring the laptop including how to accommodate multiple studies using different surveys and different versions of Blaise, data model migration on the laptop, and using the Blaise 5 API to bring survey data into the sample management system.
Mark Pierzchala, MMP Survey Services
From October 2017 through July 2018, a group of BCLUB members worked on a high-level description of Blaise 5 multimode management. The contributing organizations included the UK Office for National Statistics, RTI International (US), UK National Centre for Social Research, Statistics Norway, University of Michigan (US), Westat (US), Social and Scientific Systems (US), and Statistics Netherlands as the software producer. Statistics Denmark and the National Agricultural Statistics Service (US) also contributed by answering some questions on their use of Blaise 4 CATI. There were three rounds of questions and answers on the major survey modes, including CATI, CAPI, paper, web, and surveys on devices. This was followed by a draft report, then a revised final report delivered to Team Blaise in early July 2018.
This presentation will focus on the major areas of agreement among the contributors. The big challenge is to define enough Blaise 5 multimode survey management functionality to be useful, while at the same time leaving flexibility for organizations to implement their own requirements. For example, while there should be a model outcome coding scheme, it should be very easy for Blaise 5 users to substitute their own coding schemes. The high-level recommendations of the BCLUB Multimode Management Group will be presented.
G. J. Boris Allan, Westat
We examine two distinct areas where the Blaise 5 ROLES feature can expand the range of possible presentations of information: (1) extending the scope of question displays by adding features to the resource database (the BLRD file) that are context sensitive to the question under consideration as defined by Roles; and (2) annotating interviews beyond basic remarks that are simply String types.
With regard to adding features to question displays we will discuss: using alternative data-collection templates other than the default template for the current datatype, such as using a DropDownList to collect categorical data; allowing varying shapes of displays for standard templates with specified defaults by choosing (say) non-default heights and widths for particular questions; and extending the definitions of input type beyond the normal for default templates, such as adding (say) the label "square meters" (or some other chosen description) after a number input box. We will also discuss Remarks, defined as Field Properties, and how their use can be controlled by Roles. This includes: giving measures of importance (field-property enumerations) attributed by interviewers themselves to particular interviewer text remarks; using varying enumerations for different types of question and for different types of text remark; and allowing remarks (and types of remark) for specified questions only and not allowing remarks for any other questions.
Michelle Amsbary, Rick Dulaney, Justin Kamens, Westat and Joelle Michaels, U.S. Energy Information Administration
The Commercial Buildings Energy Consumption Survey (CBECS) periodically collects energy consumption and expenditure data from US buildings. For several data collection cycles, dating back to 1999, CBECS has used Blaise for data collection. Beginning with the 2018 CBECS, the goal is to encourage respondents to complete their questionnaires on the web, which naturally suggests a move to Blaise 5. However, during the course of gathering requirements, the U.S. Energy Information Administration (EIA) and Westat identified several challenges around the conversion of a CAPI instrument to web. These challenges fall into two categories. First is the general look and feel of the instrument on the web: how to define elements of basic screen presentation, such as layout, font size, emphasis, branding, and color palette? How to handle graphics in question text or responses? How to position navigation buttons, and how to accommodate missing responses? The second category includes items more specific to fielding CBECS, such as the use of help screens, show cards and optional text.
EIA and Westat initiated the development of best practices for web screen presentation on CBECS. We held seminars with methodologists, conducted a literature review, and surveyed publicly available common screen presentations, and from these activities we developed and refined a basic screen presentation template. We then identified representative CBECS questionnaire items for all item types and special situations. In addition to the core question types generally used throughout surveys – categorical, continuous, string, select all that apply, etc. – we also identified more complex types such as date pickers, grids, and lookups. We programmed these items in Blaise 5 for the web and refined the presentation through multiple review cycles, resulting in a web screen presentation that works well for the CBECS and generally for web data collection. In this presentation we will demonstrate key aspects of the CBECS web screen presentation.
Karl Dinkelmann, Shane Empie, Rebecca Gatward, Lisa Holland, and Andrew Hupp, University of Michigan
Survey Research Center (at the University of Michigan) last updated their CAI screen design guidelines in 2008. As the Center began to program new survey questionnaires using Blaise 5 and transition established surveys, it became apparent that this was an ideal time to update the guidelines and accompanying library of screen templates.
This paper will describe the process we followed to review and update the guidelines and develop the accompanying screen templates for interviewer administered modes - including decision making from a technical and design perspective and lessons learned. This paper may be helpful to other organizations that are about to start or are in the process of updating their own screen guidelines.
Rogier Hellenbrand, Statistics Netherlands
Statistics Netherlands uses Blaise 5.3 as its new main tool for CAWI, and CAWI is the preferred mode of interviewing. After a brief description of the way we operate Blaise questionnaires within our IT domain, we would like to demonstrate how we have incorporated several (new) Blaise features, e.g.: using Blaise events to promote response from the interviewing domain to the statistics department, and changing questionnaire templates in production (textual changes only) without disrupting the interviewing process.
Finally we would like to discuss the challenges we faced during our load and performance testing on our Blaise 5.3 CAWI environment.
Leif Bochis Madsen, Statistics Denmark
Statistics Denmark has for more than ten years used Microsoft InfoPath as the tool for developing web questionnaires for our Business Survey Platform. Because this tool will be discontinued in a few years, we considered alternative tools and settled on Blaise 5 as the main tool for developing web questionnaires in the coming years.
However, the Business Survey Platform also comprises a number of frameworks and systems that support automated and efficient work procedures in the data collection process. Implementing Blaise 5 as the tool for web questionnaire development therefore implies a range of adaptations to support the exchange of data and metadata between the various subsystems.
Important parts comprise exchange of data to and from our Business survey data store (XIS = Xml-based Input System), incorporation of Blaise questionnaires into our Business survey portal (VIRK) and our backend survey administration system (IBS).
The work has involved a large number of decisions about the details of communication between the various parts, i.e., which specific constructs are needed to transfer data and metadata to and from the Blaise questionnaire. As a result, a basic template for questionnaires has been developed, alongside a standard resource database that includes, besides layout standards, constructs supporting the communication between Blaise and our backend system.
In order to make it possible to move a portfolio of approx. 65 questionnaires to Blaise in a few years, we also need to consider possible ways to auto-generate Blaise code from existing metadata in various formats.
Rhonda Ash, Karl Dinkelmann, Shane Empie, Rebecca Gatward, Andrew Hupp, Jason Ostergren, James Rodgers, Marsha Skoman, Rhymney Weidner, and Laura Yoder, University of Michigan
The Health and Retirement Study (HRS) is a longitudinal panel study that surveys a representative sample of more than 20,000 people in America. Supported by the National Institute on Aging (NIA) and the Social Security Administration, the HRS explores the changes in employment status and the health transitions that individuals undergo toward the end of their work lives and in the years that follow.
In 2018 the HRS introduced the web as a mode of completion. Development work for this has taken place over the last four to five years, which included transitioning the survey to Blaise 5 and redesigning the questionnaire for mixed-mode data collection.
HRS will be used as a case study to describe the processes involved in transitioning an established and complex survey to Blaise 5. We will describe how we adapted the tools and systems involved in the major phases of the survey process to Blaise 5, for example, data collection, sample management, data processing and delivery. We will also focus on specific components or conversion of specific questions within the interview.
The format of the session will be presentations of full and 'mini' papers and more informal experience and information sharing and Q&A. The session will be documented as one paper for the proceedings.
Jeldrik Bakker and Harry Wijnhoven, Statistics Netherlands
It has been two years since we first presented our overview of the do's, don'ts, and don't knows of questionnaire design for smartphones. In this updated presentation we will first of all explain why we really need surveys to be compatible with smartphones. Ignoring or blocking smartphone respondents wasn't a good option two years ago and is even worse today. Secondly, we'll go over the basic elements needed to make a survey mobile friendly, showing what to do and what to watch out for. Finally, current and future research is presented, ranging from very practical topics like how to use auto-forward functionality to a state-of-the-art prototype for a virtual reality survey.
Kathleen O'Reagan and Nikki Brown, Westat
A concurrent-mode mail and web survey was implemented in Blaise 5. The design included mail out of a paper survey to all respondents who were given the alternative to use the web if they so preferred. For this study it was important to optimize the online layout to support multi-device use. Blaise 5 enabled respondents to use iOS, Android and Windows devices with varying screen sizes, browsers and other specific controls. The challenge of designing a web interface for respondents who may also be referring to accompanying hard copy when completing a survey had to be considered in the layout design for Blaise 5. Many of the questions had been designed for larger screens and tabular presentation and required adaptation. Multiple layouts had to be created to accommodate varying screen sizes and the presentation varied between the DEP, browser, tablet and smartphone. The layout issues involved color, design, images, and interactive content as well as the presentation of the information to be usable and readable. This paper discusses how the survey was managed and design changes needed to maximize the flexibility of Blaise 5 such as allowing respondents to use texting to initiate the survey on their personal devices.
Bryan Bungardt, Statistics Netherlands
Statistics Netherlands would like to show some of the features that Blaise offers and that we have implemented in our CAWI questionnaires.
We will do this by logging into some of our social and business surveys and showing some of these features live on stage. Features that we currently have in production are: video, multi-language, Google search implementation, multi-layout (smartphone), and client-side calculations.
Petri Godenhjelm, Pyry Keinonen, and Anna Niemela, Statistics Finland
This paper discusses the experiences gained from designing and testing the Blaise 5 household questionnaire for the EU Survey on Income and Living Conditions. Statistics Finland is implementing the web as one mode in mixed-mode survey designs, and mobile-first principles in particular are followed in survey design.
In 2017-2018 we have been developing mobile-layout and re-thinking ways to present grid questions in questionnaires. Our aim is to improve user experience and usability of web-questionnaires regardless of what type of device the respondent uses. Responsive web design is an ongoing trend that guides our Blaise-development heavily. One of our goals is to make a fully responsive layout for Blaise web-surveys and get rid of multiple different layouts for multiple devices.
During the design process, several new design features of Blaise 5 were learned and tested. The main task so far has been making the Blaise 5 layout scale to different screen sizes. We have also examined ways to break grid questions down into single questions without increasing response burden, and to use buttons as much as possible instead of response fields that require typing. Usability testing was done through cognitive interviews using the concurrent think-aloud method, with all test interviews recorded. This video material, together with usability guidelines, guided the design choices during the redesign of the web questionnaire.
The most challenging part has been adapting to Blaise 5 version changes. On the other hand, deep knowledge of Blaise has developed during this process, and at the same time new processes have been developed to optimally utilize Blaise 5 in the whole organization, especially in the mixed-mode design. The solutions designed for the formation of households can also be used in other questionnaires in the future.
Ole Mussmann and Harry Wijnhoven, Statistics Netherlands
The habits of society are changing. We use different devices to communicate with the world, and our relationship with data is changing: instead of only consuming or giving, we expect a dialogue. People want to expend as little effort as possible and want to be kept engaged. These trends greatly affect the quality of data when using people as a source. How can we keep respondents' attention? How can we minimize their burden?
Statistics Netherlands will report on new developments in data collection. What are short term approaches? How do we plan for the time to come? Let's find out how to bring statistics into the future.
Andrew Piskorowski, Mark Simonson, and Laura Yoder, University of Michigan
Paradata that are captured during the survey process are a valuable source of information in helping us understand and improve the data collection process.
Paradata that are linked directly to the administration of a survey instrument are collected automatically by the Blaise software (i.e., the audit trail). The ADT file from Blaise 4 has been valuable in understanding interviewer behavior. With Blaise 5, we have been able to widen the collection of paradata to include behavior on web SAQs (self-administered questionnaires) and/or mixed-mode projects (i.e., interviewer and web SAQ combined).
The main focus of this paper will be to share the results of a utility that automatically parses these sources of paradata from Blaise 4 and Blaise 5 into usable tables for analysis, reporting, and quality control. The data from each version can be stored together and used in conjunction with other systems such as timekeeping, expense, survey data, and sample management. This paper will identify and demonstrate how these paradata are parsed, transformed, and reported.
In addition, reporting tools such as SSRS (SQL Server Reporting Services), Excel, and Power BI are used to distribute the data to various user groups (e.g., PIs, production managers, statisticians, etc.). The resulting output is also available in a SQL database and can be accessed using other reporting and analysis tools. The transformation techniques and standard paradata reports can be implemented by any user of Blaise 5 paradata to enhance the use of these data.
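The core of such a utility is turning raw audit-trail lines into rows suitable for a database load. The sketch below assumes a simplified, hypothetical line format (timestamp, event, field, value separated by "|"); the actual Blaise 4 ADT and Blaise 5 audit formats differ and would each need their own parser:

```python
# Hypothetical audit-trail excerpt; real Blaise audit formats differ.
SAMPLE = """\
2018-10-01T09:00:01|EnterField|Intro|
2018-10-01T09:00:07|LeaveField|Intro|1
2018-10-01T09:00:08|EnterField|Age|
2018-10-01T09:00:15|LeaveField|Age|42
"""

def parse_audit(text):
    """Split raw audit-trail lines into row dicts ready for a SQL load."""
    rows = []
    for line in text.strip().splitlines():
        ts, event, field, value = line.split("|")
        rows.append({"ts": ts, "event": event, "field": field, "value": value})
    return rows
```

Once the Blaise 4 and Blaise 5 parsers emit rows in this common shape, the two versions' paradata can be stored together and fed to the same reporting tools.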
Peter Stegehuis, Westat
Interviewer comments, or remarks in Blaise, are a useful way for field interviewers to record information that may be necessary for the data editing/cleaning stage. Comments may be made when interviewers are not sure what to do with information provided by the respondent. Also, during long and complicated interviews it may not always be feasible to back up to the question for which the answer may need to be changed, so a comment might be the only way to relate important details.
Triaging all the Blaise remarks can be a very time-consuming and costly task. A Blaise remark is just a text string, so there is no structure or categorization possible. Reading every single remark is the only way to determine whether or not the remark contains actionable information. Prioritizing, or even only dealing with, certain categories of comments is virtually impossible, as categorization cannot happen without reading the comments in the first place.
This paper shows a solution we have implemented in Blaise 4.8, using the integrated functionality of the DEP and Manipula. This approach keeps the strong points of Blaise remarks - the interviewer can add or edit a remark at any question, and a paperclip icon shows the existence of a remark. But it adds a new element: first the interviewer has to select a category before adding the comment itself. The comments get stored in their own data file, with the comment category as a separate variable.
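A datamodel for that separate comments file might look roughly like the following. This is an illustrative sketch, not the Westat implementation; the category names and field sizes are hypothetical:

```
DATAMODEL CategorizedRemark "One stored remark with its category"

FIELDS
  FieldName "Question the remark belongs to" : STRING[64]
  Category "Type of remark" :
    (DataProblem (1)   "Answer may need correction",
     Clarification (2) "Respondent clarification",
     OtherNote (3)     "Other note")
  RemarkText "The free-text comment itself" : OPEN

ENDMODEL
```

Because the category is an ordinary enumerated field, the data-editing staff can sort and prioritize remarks by category without reading every comment first.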
Max Malhotra, University of Michigan
Using Blaise 5 for interviewer-administered surveys poses unique technical challenges. One of these is training interviewers in a clear and concise manner so they can be brought up to speed on the unique aspects of each data model in the shortest time possible. What we as an organization have observed is that one effective training mechanism is a round-robin run-through of a data model script, in which interviewers take turns in different roles, verbalizing the questions and possible response options to simulate a genuine interviewer-administered session and ultimately build their confidence through the activity.
This can be technically challenging: some data models change rapidly during development, a script can run to hundreds of pages, and no out-of-the-box solution for script generation was provided. We therefore built a utility in house to handle Blaise 5 script generation. Here we will talk about some of the technical components behind the development of this application, which utilized a number of Blaise APIs (such as Meta, DataRecord, DataLink, and SessionData) as well as numerous other components to bring this project together.
Jeremy Iverson and Dan Smith, Colectica
Blaise Colectica Questionnaires allows survey researchers to build surveys faster, to leverage the DDI metadata standard, and to generate rich documentation and reports. The software improves transparency into the data capture process.
The tool offers an intuitive survey design surface and questionnaire palette, allowing survey designers to build questionnaires without learning a domain-specific language. Questions, blocks, and logic can be created within the program or reused from a question bank powered by DDI. Reusing standardized questions assists in creating more comparable data.
The third release, launching in October 2018, includes support for grids, rosters, dynamic text, and text formatting. Also featured are collaboration improvements and flowchart export.
The software stores questionnaire specifications using the open DDI and GSIM standards, and can connect to metadata repositories and question banks powered by Colectica software. Data descriptions can be linked with source questions, creating harmonized data and showing data lineages.
Surveys designed with this tool can be fielded using Blaise 5 on the desktop, on the Web, and on mobile devices. The tool converts the DDI metadata into a Blaise project and source code. Changes to surveys made with the tool can be published and executed within the Blaise environment, allowing rapid iteration while developing surveys.