Accessibility Testing Coverage: Automation and Intelligent Guided Testing
Type: Breakout
Track: Wildcard
Ever wonder how much accessibility coverage you can actually get from technology like automation and Intelligent Guided Testing? In this session, we’ll review three years of audit data to show the actual accessibility coverage percentages you can get from various technologies.
 
We have two more minutes.  I am still Noah.  We have a couple of
minutes, so we will let people continue filtering in and getting settled
here for Ms. Glenda to take us through her session on accessibility
testing coverage.  Excited to have everybody here and looking forward to
this session.  Shout out and thanks to Angie, our captioner; I
appreciate you very much, and I am really looking forward to sharing the hour
with everybody.
Just another minute here and we will get ready to kick off.
One more minute and we will get started.
 
All right.  It is 2:31 Eastern, so we will kick off.  Hello, everyone.
My name is Noah, and I am with Deque.  I will be moderating this session
today, Accessibility Testing Coverage, brought to you by none other than
Ms. Glenda.  I will just take care of a few housekeeping items.
First, today's session is being recorded and will be hosted on demand
for registrants.  If you require live captions for today's session, you may
access those on the session page just below the stream.  Lastly, we will
take the last five minutes for Q&A.  I was told earlier by a fellow
Dequer who uses a screen reader that the Q&A is a link, so I mention
that for navigation's sake.
>> Thank you so much.  Glad everybody is here.  Let's dive in to
coverage.  I am Glenda, and I would love it if you call me Good Witch; I am
a huge fan of Wicked.  Up on screen I have images from Wicked.  I have
had the great honor of working at Deque for ten years, and I
live for this.
First, let's start with accessibility testing and time.  I really want
you to think about whether you have enough time and resources to do the
accessibility testing you need to do, whether it is for your own company
or whether you work in a company like I do where you're helping others.
Is there enough time to do all the things that need to be done when it
comes to accessibility testing?  I also want to do a thought
experiment here, and I would love it if you threw your answers into chat:
how long does it take for you to manually test an average web page?  I
know that is a loaded question; there is no average page.  But when you
are testing for WCAG 2.1 A or AA, at the AA level,
that is 50 different success criteria.  I will guess at what might be
coming in on chat, if anybody is answering.  I have seen numbers, or
heard people say, as low as 30 minutes, and I have seen numbers that are
three hours, four hours, six, eight, and larger.  Noah, has anybody typed
hours in, or are they being quiet and shy?
 
>> We're getting 30 minutes to two hours, longer than expected.  A few
hours, a couple of hours.
 
>> Right.  I miss our live audience feedback, so I appreciate that.  Now
that we have that feedback: how quickly do those test results rot?  You
know what I think?  They start rotting before I even finish the test
sometimes, because somebody is changing the code that I'm literally
testing.  I can't get them to freeze.  I wish I could get them to
freeze, but I also understand that in this dynamic world we're living in,
moving at the pace of life, those manual test results are not
happening on static, unchanging pages.  So with this concept of do you
have enough time, how long does it take, how quickly is it rotting:
Houston, we have a problem.  And I think we all sense this.  There is
this strong, strong desire for us to make everything accessible, and yet
finding the right resources and using them economically and wisely is super
tough.
So before I give you some of the news, I want you to think about 
some other things and that is what would WCAG success criteria have the 
most issues for you?  Maybe you're like me and you have been at this for 
20 years and you have tested every type under the sun, and you have a 
sense of it is this, this and this and you can spout it out.  I would 
like you to actually jot them down on paper, or throw them in the chat.  
Because I want to see if before I show you what I have discovered, if 
you had the same instinct from your experience as well.  
Now that I have had you think about that, and I'm not going to
ask for those answers yet because I know you all are talking to each
other, and that is great, you're about to give me the feedback when we
get there, there is one more critical question before I show you
the data.
How much can automated testing help us?  I have been in this for a
long time, and I have been saying myself what I think the percentage is
of how much automation can help.
So, on average, what percentage of your issues do you think can be
found by an automated tool like axe-core?
As a percentage, do you think it is 15, 20, 25, 50?  Are
you one of those people who wants to write in your own answer?  Awesome,
do it.
Because I have news, and it blew me away when I saw it for the
first time.  It dawned on me that we're sitting on a gold mine at Deque,
because we do audits: full, manual audits that also take advantage
of the axe-core rules but are fully testing for WCAG 2.1, all the
screen readers, whatever assistive technology we need to pull out.  I am
sitting on big data.  And we have looked at it a
couple of different ways.  First we looked at everything we
have done over a long period of time.  Then we thought, well, that
might skew the data, because we have some audits in there where a client
comes, gets their first audit, makes some fixes, and then gets
a second audit; yes, by then they have got it.  That might skew
the results.
So we decided to sift through the data and just pull out new
clients that hadn't done remediation yet.  Now, here is the data.
I am about to show you a slide that is very data intense.  I am really
sensitive to the fact that there are, I hope, a large number of people on this
session right now who can't see the slide, because we have an inclusive
audience.  So I will take some time to describe it, but in the meantime,
Noah, if you could put a link into chat to our final report on this,
everyone can relax knowing they can download the report this is
coming from.
 
>> I will do that right now.  
 
>> Fantastic.  Here are the high-level notes I am trying to emphasize on
the slide.  It says Deque comprehensive audits for new clients, so it
has been restricted to just new clients that haven't done any
remediation.  I will have anyone who can see the screen drop to the
bottom visual on the screen, where it says the percentage of automated
issues found across thousands of pages, and hundreds of thousands
of issues, is 57.38 percent.  Anybody surprised by that?
I will tell you, when I first saw the data I was like, wow, is that
true?  And we dug and verified, and dug and verified.  So that's
42.62 percent that were found manually.  I want to take a moment to tell
you that on screen I have the top 15 most-failed success criteria by
issue count.  Who is surprised that number one is 1.4.3, contrast
(minimum)?  No surprise there.  The next is 4.1.2, name, role, value.
Then 1.3.1, info and relationships.  4.1.1, parsing.  1.1.1, non-text
content; that is alt text, you knew it was coming.  Followed by focus
order, keyboard, and focus visible.  And I was thrilled to see at number
nine non-text contrast, a WCAG 2.1 criterion.  Number ten, use of color.
Number 11, meaningful sequence.  Number 12, labels or instructions.
Number 13, bypass blocks.  Number 14, page titled, and number 15,
language of page.  And then there are the rest.  That was the other
stunning moment for me: after I got over the shock that 57.38 percent
were found automatically, OMG, automation can give us more lift than I
realized.  I'm not negating the importance of the manual work, and I'm
not throwing away all 50 success criteria, no, no.  But what I am trying
to say is that our results rot so quickly, we have to think about how we
use our time and energy.
And then I saw the other startling thing: those top 15
success criteria I just read to you accounted for what percentage of
all issues?  Hello, 94.54 percent.  Jaw drop.  I hope you're having a
similar jaw-dropping experience with the data, and I hope you're ready
to pound me with questions, because I think this is a really fun
intellectual debate.  So as we move forward on this big data, let's
just go to a simpler visualization: 94.54 percent of WCAG issues
fell into 15 success criteria.  57.38 percent were found with axe-core,
with automation.  And 42.62 percent were found by manual testing.
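
For reference, here is a minimal sketch of what an automated pass like that looks like in code, using the real @axe-core/playwright package.  The URL and the WCAG tag filter are illustrative assumptions, not Deque's audit setup.

import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

// Run axe-core's automated rules against one page and count the issues
// it can find without a human in the loop.
async function scanPage(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Restrict to WCAG 2.1 A/AA rules (illustrative scope).
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();

  console.log(`${results.violations.length} automated violations found`);
  for (const v of results.violations) {
    console.log(`  ${v.id} (${v.impact}): ${v.nodes.length} node(s)`);
  }
  await browser.close();
}

scanPage('https://example.com'); // placeholder URL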
So do these big data insights ring true to you?  Are you ready to
pepper me with questions and wanting to dig in?  I hope both.  I hope it
is ringing true, and I hope you do want to ask questions, because that's
how we learn more.  One of my favorite movies is Willy Wonka and the
Chocolate Factory.  There is a quote where Gene Wilder says, we have
so much time and so little to do.  Oh, strike that, reverse it.  And I
really feel that way.  On screen I have a fun visual of Willy Wonka from
that movie.  Where do you want to catch your bugs?  You will have an
agile sprint or scrum, a shippable product, and a deployed system; where do
you want to catch your bugs?  You certainly don't want to catch them at
the back end.  When we do accessibility testing at the end, it is a
surprise, it is a risk, and it is a much higher cost.  And then there is
back and forth, back and forth between whoever is doing the testing and
the people in development who didn't expect these bugs.  And it is just
expensive.
 
You know, you have experienced the cost of your
accessibility bugs.  They are so much less expensive to fix when you're
solving them while you're still at a whiteboard, a little bit
more expensive when you're solving them in the design phase.  And yes,
it is not tiny when you're solving one in the development phase, it is a
bug, you have to fix it, but it starts getting really big when you're
solving it at QA or after release.  It is sometimes ten to 15 times or
more expensive.  So truly, the longer it takes to discover an
accessibility bug, the more it will cost your organization to fix it.
That is really compelling evidence for why we need to shift our
accessibility testing into the earlier phases, so that we're not
catching accessibility problems at "I am ready to ship in an hour" or,
worse, "I am deployed," but back at least in the development life cycle.
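
As a concrete illustration of catching such a bug in development rather than in QA, here is a minimal sketch using the real jest-axe package; the form markup and test name are made up for the example.

import { axe, toHaveNoViolations } from 'jest-axe';

expect.extend(toHaveNoViolations);

test('signup form has no axe violations', async () => {
  // A made-up fragment with a classic bug: an input with no label.
  document.body.innerHTML = `
    <form>
      <input type="email" placeholder="Email" />
      <button type="submit">Sign up</button>
    </form>`;

  const results = await axe(document.body);
  expect(results).toHaveNoViolations(); // fails here, while it is still cheap to fix
});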
Now, these are not the only places you can shift left.  You can
shift into your design, you can shift into your product backlog, but
right now I will focus on what we can do in that dev cycle, which is
proactive accessibility testing at the development stage by your really
smart developers, who do like to get things right.
 
 
So, speaking of getting it right the first time, there is a tool that I
want to make sure everybody knows about, and it is called axe DevTools.
On the left of the screen I have "proactive axe DevTools" and a picture
of a shirt I'm wearing with a curly bracket: friends don't let
friends ship inaccessible code.  Do you know why?  Because developers
really prefer not to have tickets coming back.  We want to get it right
the first time.  Versus what so many of us have to experience: the
more reactive, far-right testing that feels a little bit like a cartoon
situation, where you're caught on a treadmill going faster than you can
even keep up with.  And I do have a little image here from a very old
cartoon from when I was a child: George Jetson running on the
dog-walking treadmill, his dog Astro watching, and George saying,
"Jane, help, stop this thing!"  Any of us who have lived that
reactive testing certainly do want to get off that treadmill.
So what I would like to introduce you to right now is axe DevTools.
Some of you may have already experienced axe DevTools; awesome if
you're ahead of the game.  If you haven't, it is a form of intelligent
guided testing specifically designed for developers.  And I want to pose
this question.  The first time I saw these stats, I said prove it, prove
it to me, because I don't believe it until I see it with my own eyes.
Developers with intelligent guided testing, IGT, really are finding
anywhere from 72 to 83 percent of the WCAG issues without being formally
trained in accessibility.  It is profound.  Now, those numbers may seem
like, Glenda, what?  I want you to remember that axe-core is accounting
for over 50 percent.  So we run that, getting around 58 percent, and now
let me show you the intelligent guided testing concept.
Remember that the top 15 success criteria, which I will just read
through quickly by criteria name only, are, in order: one, contrast
minimum; two, name, role, value; three, info and relationships; four,
parsing; five, non-text content; six, focus order; seven, keyboard;
eight, focus visible; nine, non-text contrast; ten, use of color; 11,
meaningful sequence; 12, labels or instructions; 13, bypass blocks; 14,
page titled; and 15, language of page.
What can we do to empower our developers?  Anybody curious, or do you
wish I would stop talking about contrast?  I often look at the contrast
numbers, color contrast, 1.4.3, on text: it is big, and it can often be
solved in one place.  So as a thought experiment, I thought, what if I
take this data and drop it?  Let's assume we will fix it; it will get
fixed up in the CSS, and it is just gone, solved.  Gang, we're still
looking at 46.31 percent of all of your issues being found automatically,
and even if we dropped 1.4.3 because we solved it, the others account for
92.2 percent of all the issues you will find, if you're like my
customers.  And I have looked at this deeply with medium-size, large,
small, and gigantic clients, and these numbers hold true.  Is it the
exact percentage?  No.  Is it in the ballpark, the neighborhood?  Yes,
it really is.
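
For anyone who wants the 1.4.3 math behind that thought experiment, here is a worked sketch of the WCAG relative-luminance and contrast-ratio formulas; the two example colors are made up.

type RGB = [number, number, number];

// WCAG channel linearization, then relative luminance.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance([r, g, b]: RGB): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Light gray on white fails AA for normal text (needs at least 4.5:1)...
console.log(contrastRatio([153, 153, 153], [255, 255, 255]).toFixed(2)); // ~2.85
// ...and darkening that one color in the CSS fixes every instance at once.
console.log(contrastRatio([89, 89, 89], [255, 255, 255]).toFixed(2));    // ~7.00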
So let me introduce you to intelligent guided testing.  It is a
powerful new feature of axe DevTools.  Intelligent guided testing for
developers is, I believe, 18 months old, and it went through a wonderful
beta.  It is completely designed for the developer who is smart, wants
to do the right thing, and is not an accessibility expert.  What does it
focus on today?  Today, the areas that we have support for are page
information, page title and language; headings, lists, and images;
buttons and links; keyboard testing.  Hey, we're getting complex here:
modals and focus management?  Wow.  And forms: labels, error messages,
and required fields.  That's some of, or a lot of, those top 15.
If you have never experienced axe DevTools before, you can sign up
for it right now.  There is a free trial, 14 days, and you can download
it today.  Once you get it downloaded, if you're already an axe extension
user, you will be used to opening up the inspector, opening up axe
DevTools, and you'll still be able to run the axe-core rules to get your
failures at a critical, serious, moderate, or minor level and to get your
review issues.  But you will see, when you get the upgrade to DevTools,
that there will be an option for guided tests.  And this is where we can
start lifting developers to get it right the first time without having
to become tenured, experienced accessibility experts.
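
Under the hood, that automated pass is built on axe-core's public axe.run() API.  Here is a minimal sketch of calling it in the browser and grouping the results the same way: failures by impact, plus the "incomplete" results that need human review.

import axe from 'axe-core';

async function scanCurrentPage() {
  const results = await axe.run(document);

  // Failures arrive tagged critical / serious / moderate / minor.
  for (const impact of ['critical', 'serious', 'moderate', 'minor']) {
    const count = results.violations.filter(v => v.impact === impact).length;
    console.log(`${impact}: ${count}`);
  }

  // "Incomplete" results are the review issues: axe could not decide alone.
  console.log(`needs your review: ${results.incomplete.length}`);
}

scanCurrentPage();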
If you open the guided tests, some panels will begin to populate.
The very first ones will be keyboard, modal, and page information, and
they're like wizards.  How many of you are in the United States and have
started working on your taxes?  It reminds me of the wonderful tax
wizards that step me through my own taxes, which are complex to fill
out, but which can guide me with well-designed questions.
 
 
Let me show you a couple of these panels.  The first one is so
simple.  I have gone into the page information one, and I have pulled up
on the left a site called our awesome recipes site.  It is a
purposefully inaccessible site that was created for testing purposes,
thank you.  And within axe DevTools it asks a simple question.  It says
to me: Glenda, on this awesome recipe site with a recipe dashboard, the
page title is, quote, bracket, "insert title here."  Does that accurately
describe the purpose of the page?  And there are radio buttons, yes and
no.  Easy to answer.  This is something developers can get right the
first time.
Keyboard testing is very powerful.  Open it up and it will
automatically run through the page, counting and marking the tab stops.
I have run it on this awesome recipes page, and it has just finished.  I
have a result of: auto tab successful, 19 tab stops were recorded, click
next to continue.  It is then going to ask me: do you see a tab stop
everywhere you, the developer, know there is supposed to be
interactivity?  If not, it will let you pick and show where they are,
and before you know it, minutes later, you have finished keyboard
testing and maybe prevented an accessibility issue from ever making it
out of development.
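
The tab-stop counting idea can be sketched in a few lines.  This is an illustration of the concept, not Deque's implementation, and the focusable-element selector is a common approximation.

// Collect the elements the Tab key would visit, so a developer can compare
// them against the interactivity they expect on the page.
const FOCUSABLE =
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';

function tabStops(root: Document = document): HTMLElement[] {
  return Array.from(root.querySelectorAll<HTMLElement>(FOCUSABLE)).filter(
    el => !el.hasAttribute('disabled') && el.offsetParent !== null // skip hidden
  );
}

const stops = tabStops();
console.log(`${stops.length} tab stops recorded`);
stops.forEach((el, i) => console.log(`${i + 1}: <${el.tagName.toLowerCase()}>`));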
A couple of other examples.  For modals, it comes in and asks: is
there a trigger for the modal?  It then checks to make sure that
once the modal is open, keyboard control stays inside the modal and
doesn't fall back out.  Then it asks if there's a trigger to close,
and once you close it, it shows you where focus is landing and asks:
is this the right place?  Did it return to the right place?  That is
something a developer can fix before it leaves their fingertips.
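
Here is a minimal sketch of the two modal behaviors that guided test checks, keeping focus inside while open and returning it to the trigger on close.  The element IDs are made up for illustration.

const trigger = document.getElementById('cook-chocolate-cake')!; // hypothetical IDs
const modal = document.getElementById('recipe-modal')!;

let lastFocused: HTMLElement | null = null;

function openModal() {
  lastFocused = document.activeElement as HTMLElement; // remember the trigger
  modal.hidden = false;
  modal.querySelector<HTMLElement>('button, [href], input')?.focus();
}

// A simple focus trap: if focus tries to leave the open modal, pull it back.
modal.addEventListener('focusout', e => {
  const next = e.relatedTarget as Node | null;
  if (!modal.hidden && next && !modal.contains(next)) {
    (e.target as HTMLElement).focus();
  }
});

function closeModal() {
  modal.hidden = true;
  lastFocused?.focus(); // the step the demo page failed: return focus to the trigger
}

trigger.addEventListener('click', openModal);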
 
 
What I would like to do now, let me see if I can open this up, is show
you this live instead of screenshots.  Noah, I want you to watch
chat for me.  If I am not audio-describing enough of what I am doing, you
attendees out there, hold me to that high standard and please interrupt
me.
 
So I have opened up a site.  I am in axe DevTools, I have run my first
scan, and it has found 33 total issues, 21 of them for review.  Because
I am familiar with this site, I am going to look at just those 21 issues
and review them to get them out of the way, because they are
worth reviewing.  Down in axe DevTools, inside the inspector in Chrome,
there is a section that says these potential issues need your review,
and asks me to save my results in order to review them.  So I will press
a link that says save results.  It will give my test the lovely title of
my page, "insert title here."  I will save it.
Then I go back to reviewing those issues.  I click on the 21 review
issues link, I click on the highlight option, and sure enough I get a
highlight on the section of the page where it is now telling me: you
know what, there could be a color contrast issue there.  Because there
is some foreground and background imagery going on, we're not sure that
automation can figure this out.
 
I now have an option to manually review this and say
yes, this is an issue, or no, it is not.
So I am going to click through and visually look at these 21
issues, checking whether all of this content is visible at a strong
enough contrast.  For anybody watching me do this, the contrast is so
strong that I didn't even need to pull out any testing tools; it is
practically black on white.  So I went ahead and did that.
Once I am through there, and I didn't strictly have to do that piece, I
could have gone straight to the guided tests.  Here is one of those
guided tests in live action.
If I go and do the simplest test, page information, there is
a panel that says page information, guided test, not started, zero
issues found, with a link that says start testing page information.  I
click on it, and it says let's get started.
Is your page in the right state?  Yes, the page is the way I want it
to look before I start.  I press start, and here is what I have given
you a screenshot of: the page title is "insert title here."  Does that
accurately describe the purpose of the page?  No, it does not.
This is my awesome recipe site for chocolate cake and spaghetti and
grilled cheese.  So instead of automation just noting that there is a
language attribute, it is the human being confirming: this is English,
and that's what the majority of the page is.  I say next.  I now have a
detailed issue that has been written for me, with code snippets and how
to solve it.  Finished, and I spent one minute, and that was while
chitchatting.
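
Those two page-information questions map to checks a developer could also script.  Here is a minimal sketch, with the placeholder-title pattern as an assumption, of what automation can flag versus what stays human judgment.

const title = document.title.trim();
const lang = document.documentElement.lang.trim();

if (!title || /insert title here/i.test(title)) {
  console.warn('Page Titled (2.4.2): title is missing or a placeholder.');
} else {
  console.log(`Title is "${title}" - does it describe the page? (human judgment)`);
}

if (!lang) {
  console.warn('Language of Page (3.1.1): no lang attribute on <html>.');
} else {
  console.log(`lang="${lang}" - is that the page's main language? (human judgment)`);
}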
Each one of these modules has that simple walkthrough.  I would like to
show you one more.  Let me go to the ARIA modal, because I think it is
super cool that the axe DevTools team took this one on.
We want to help developers understand this kind of test.  So I have
opened up the intelligent guided test.  Before I hit start, I make
sure I put my site in the state I wish; I am good, I am on the
right page.  It asks me a question: does the modal you would like to
test have a button which launches it?  Yes, my modal has a launcher; I
am familiar with the site.  The modal I will click open is
the cook chocolate cake one, but it is just asking, is there a launcher?
I say yes.
 
Now it says: please select the launcher that triggers the
modal you would like to test.  So I come over and actually
click on the cook chocolate cake area; it is in a span on the page.  And
I click next.  The intelligent guided test triggered the modal, because
I told it where the trigger was, and then it ran a little test and
said click next to continue.  It is now making sure that focus stays
within the modal, and it tells me to click next.  It asks me a simple
question: can this modal be dismissed or closed?  Yes, it can.  I say
next.  It dismisses, tries to close, and I see that it is highlighting
the whole scrolled area of the page as where focus landed.  It now
highlights all the way up here, the whole page, and asks: hey, is
that where focus was supposed to go when the modal was dismissed or
closed?  No, no.  So the return was not there.  And then it wrote the
error for me: when the modal is closed, focus is not returned to the
triggering element.  Finished.  I finished testing that modal in two
minutes, on top of chitchatting.
So I have given you two live examples.  I will shift back
to my presentation, encouraging you to sign up for the axe DevTools
two-week free trial, and let you know that what I have shown you here is
really just the beginning.  Because we will not rest until we can make
more progress in having things be born accessible, empowering developers
and designers to do the work right without ten years of accessibility
experience.  We want to help people succeed, so you will see more
intelligent guided tests coming.  So I want you to commit to shifting
left, recognizing that you can save time and money, and I swear, I
swear, your developers with intelligent guided testing really can
find anywhere from 72 to 83 percent of your accessibility defects during
development.  I am not kidding; it is true.  Does that mean manual
testing experts like me will cease to exist and get old and dusty?  No,
I think we will still be here, but I would like you to get the stuff
fixed that we can fix at the developer level.  So I want to ask: are you
ready to step into Willy Wonka's elevator with me?  I am showing a scene
where they have just stepped in.  They're a little nervous.  Why am I
asking you to step in here with me?  We need to break through this
imaginary glass ceiling we put above our heads.  We need to think
differently; we need to not just do it the same old way.  Think
sideways, think inside out, think upside down, just like the elevator,
because we really can make our accessibility dreams come true.  I
believe it, and we owe it to all the people who came before us and built
the foundation that our careers are currently built on.  So let's make a
better future.  Time for questions.
Noah, does anybody have any questions?  
  
  
 
>> Oh, Glenda, what an active, awesome bunch.  I have been fervently
trying to answer as many questions as I can.  What a great bunch of
people.  Glenda, awesome presentation as always.
There were some questions I didn't answer because I wanted you to
weigh in.  How long does it typically take to do accessibility testing
if you're doing a manual comprehensive audit?  And in your estimation,
how might a tool like IGT impact that time?
 
>> That's something we're experimenting on now.  What I would say is,
if we're looking at a full comprehensive audit, all 50 success criteria
for WCAG 2.1, and who knows what 2.2 will end up adding, right now we're
looking at the efficiencies of moving the intelligent guided testing
tool in there for our experts.  But I will tell you that our primary
goal at this moment is not to make it easier for the experts; it is to
make it easier for the innocent, intelligent developers, so we can stop
this further left.  So I don't have the data yet.  But here is what I
believe; I run experiments all the time.  If a person tests a
relatively complex page, as in your average web page out there, and they
spent less than two hours on it, I think there is a section they're
missing.  And if they spent eight hours... we can spend all day there.
We can spend all day there.  So I hope, I hope, that this will move into
the accessibility expert field, but our first focus is: let's get the
developers to do it.  So it is not designed ideally for me yet.
 
>> Yes, I agree.  Just from my own experience talking in the field, that
is exactly how the discussions go.  Right now it is about that shift
left and using it upstream.  When you talk about efficiency gained
within a dev process, that is 80 percent of your tickets you don't have
to write.  It still takes testing, and testing still takes time.
 
>> I will say one more thing.  It is hard being an accessibility expert
inside a company that cares passionately about this and having to
recognize: I am not the primary user target.  It is that developer.  So
it is really important that we not pollute the design to solve my needs,
but that we design it for the developer, so we can get that max lift at
that level.  And I'm a secondary customer.  Accessibility experts are a
secondary customer.  We are still working on it.  Fun question; I could
talk about it all day.
 
>> You and me both.  There was another question that I thought was
interesting.  There is a lot of data behind the research, and I tried
to answer questions using the coverage report, because there is a lot of
breakdown in it on how we sourced the data and what sorts of
applications were included.  But there was a question about whether we
looked at bucketing the applications or websites by specific types, like
read-heavy applications, and how that might affect the data.  Did we
slice or dice the data?
 
>> We did, we did slice and dice.  It was hilarious how little it
changed things.  I think the only bucket that had a significant change
was some EDU sites.  So we did slice it, and it was so non-useful that
we didn't include it in the report.
 
>> Interesting.  In a certain way that is very heartening; the data is
so general that we really did get a good global view.
 
 
 
>> It is a very broad spectrum.  And what is fascinating about our
client base is how diverse it is.  One day I will talk to somebody heavy
on interactive games, and another day somebody in finance.  It runs the
whole gamut.
 
>> There are a lot of questions about whether we will have similar tools
for native mobile environments.
 
>> You know what, Noah, we need to answer that question after the
fact.  Can you find a way for us to answer that for whoever asked it?
I don't know the answer, but of course we want to, and I don't
want to guess.
 
>> Totally agree.  For everybody's awareness, all the Q&A will be
archived and captured, so we will reply where we can.  Like you said, it
is the kind of thing we are doing research on, but beyond
that, there is not much more I can say.
There are some questions about comparing and contrasting tools,
like the axe tool and the intelligent guided tools, to some of the other
awesome free accessibility tools like WAVE and Accessibility Insights.
Do you have any general comments on how these things compare?
 
>> What I have experienced is that I haven't seen true intelligent
guided testing in WAVE.  I see good automated coverage in WAVE, but not
IGT.  And Accessibility Insights we have looked at deeply, and there are
some pieces that have some similarities, but, and I am a purist, I
have been doing this for 20 years and I come from an EDU background, I
think it is in the vein of this but not as advanced in how much help and
lift it gives, or as focused on developers getting through
quickly and accurately.
 
 
 
>> Agreed.  Another question here, and I love how specific this question
is, from Nick the Geek: one of the biggest issues that comes up is
minimum contrast, and one of the more difficult cases to test is text
over a variable background, like images and gradients.  Are there plans
to add testing for this to DevTools?
 
>> I do think that anything up in those high-coverage areas, those top
15 criteria, is in our space now.  And not to say those are the only 15
we're looking at, because we're also looking at things that, even though
we might not be able to automate them, we can certainly guide you
through.  Let me give you an example.
Testing multimedia for captions and audio descriptions is not
something we can automate fully at this point, or I haven't figured it
out.  But a colleague is working right now on intelligent guided testing
for that topic.  So yes, we want to cover the gamut.
 
>> Yes, as much as possible.  That makes sense.  We have five minutes
left in our session, so we will keep cruising through these.
This is actually related to what I think you just said.
Is there a way to use the axe tool to test media in the browser,
or only content on the page?
 
>> There will be.  I was just talking about it last night.  I don't know
what her release date is for it, and I don't know if that was news I was
not supposed to share, because she will talk about it later.  I don't
know if you have noticed, but she gets pretty passionate.  She is like a
freight train, so I anticipate you will see that one pop up sooner
rather than later.
 
>> Yes, exciting stuff on that.  Somebody asked: I would like to
contribute my data team and experiments on testing medical solutions
like electronic health records and patient health records.  Do you have
any advice for anything like that?
 
  
 
>> I think that with the intelligent guided tests in DevTools,
especially if you have a web interface you want to run this on, you can
have competitions and work sessions with your developers.  Have them
hack and see what the fastest way is for them to get through.  When we
did these things internally, we ran experiments, and we named each one
after whoever came up with it.  That competition inspires people to
think differently about making this work more doable.  And know this: as
your developers go through it, there was one interesting thing I
discovered.  Remember when I showed you the page information screen,
where it asks those two simple questions: is this an appropriate title,
and is this the appropriate language?  Well, I can finish that testing
so quickly and reliably that my results at the end say I spent zero
minutes, and that makes me mad.  I am like, hey, I spent 30 whole
seconds in there and I finished a section!  Small things like that
matter: what drives the developer, what makes it sticky, what makes them
want to do this and be successful.  So I would run hack days and
experiment days.
 
>> Interesting.  Sounds like fun.  Several questions along this line: is
Deque doing analogous research with our native mobile tool sets?  Is
that something we're actively working on?
 
>> It is.  I just only have so much room in my brain, so I don't have
that data in it, but yes, we have a dedicated team working on that.  I
just don't have the numbers right now to share.
 
>> And I will make sure to communicate that out to people: we're
working on it, and we want those numbers as much as the HTML ones, but
we just happened to get the web data first.
 
>> Yep.  
 
>> I tried to answer as many of these as I could on the fly.  
 
>> I want to ask people: do you feel that this data is compelling?  The
way we have been talking about automated coverage, we have only been
counting success criteria, which SCs were addressed, as opposed to truly
counting the percentage of issues that are found automated versus
manual.  Is this compelling to you, or are you like, yeah, whatever,
Glenda?  Because it made my day, and I want to know if it made yours.
And I know there is a 22-second delay.
 
>> There is a 22-second delay, so we will see how it goes in the chat.
We are at the hour.  I will watch this as we wrap up, but just to keep
everybody as close to schedule as possible: thank you, Glenda, for
sharing your time with us and this awesome research and a great
session.  Thank you to everybody for attending.  I am really blown away
at the level of engagement and questions.  Please enjoy the rest of your
axe-con.  Thank you and have a great rest of your day.
 
>> Bye.  Thank you.  
 