AI-RESEARCHER-2024 committed
Commit bc2815c · verified · 1 Parent(s): 5a491ef

Create app.py

Files changed (1)
  1. app.py  +831  -0
app.py ADDED
@@ -0,0 +1,831 @@
+ #!/usr/bin/env python3
+ """
+ CICE 2.0 Healthcare Assessment Tool - Gradio Version
+ Converts the Google Colab notebook to a deployable Gradio application
+ """
+
+ import gradio as gr
+ import google.generativeai as genai
+ import os
+ import time
+ from datetime import datetime
+ import re
+ from gtts import gTTS
+ import tempfile
+ import io
+ import base64
+
+ # Configuration
+ GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
+ if not GOOGLE_API_KEY:
+     raise ValueError("GOOGLE_API_KEY environment variable must be set. Please add it to your HuggingFace Space secrets.")
+ genai.configure(api_key=GOOGLE_API_KEY)
+
+ class CICEAssessment:
+     def __init__(self):
+         self.model = genai.GenerativeModel("gemini-2.0-flash-exp")
+
+     def analyze_video(self, video_path):
+         """Analyze video using the 18-point CICE 2.0 assessment"""
+
+         if not video_path:
+             return "❌ No video file provided. Please upload a video file."
+
+         try:
+             print("📤 Uploading video to Gemini...")
+             video_file = genai.upload_file(path=video_path, display_name="healthcare_interaction")
+
+             # Wait for processing
+             print("⏳ Processing video (this may take 1-2 minutes)...")
+             max_wait = 300
+             wait_time = 0
+             while video_file.state.name == "PROCESSING" and wait_time < max_wait:
+                 time.sleep(3)
+                 wait_time += 3
+                 video_file = genai.get_file(video_file.name)
+
+             if video_file.state.name == "FAILED":
+                 return "❌ Video processing failed. Please try again with a different video file."
+
+             print("🤖 Analyzing with CICE 2.0 criteria...")
+
+             # THE 18-POINT CICE 2.0 ASSESSMENT PROMPT
+             prompt = """Analyze this healthcare team interaction video and provide a comprehensive assessment based on the CICE 2.0 instrument's 18 interprofessional competencies.
+
+ For EACH of the following 18 competencies, clearly state whether it was "OBSERVED" or "NOT OBSERVED" and provide specific examples with timestamps when possible:
+
+ 1. IDENTIFIES FACTORS INFLUENCING HEALTH STATUS
+ - Did anyone verbalize factors affecting the patient's health (medical history, social determinants, lifestyle factors)?
+
+ 2. IDENTIFIES TEAM GOALS FOR THE PATIENT
+ - Were specific team goals for the patient discussed or established?
+
+ 3. PRIORITIZES GOALS FOCUSED ON IMPROVING HEALTH OUTCOMES
+ - Was there clear prioritization of goals to improve patient health outcomes?
+
+ 4. VERBALIZES DISCIPLINE-SPECIFIC ROLE
+ - Did team members introduce themselves and clearly state their professional role (e.g., "I'm Dr. Smith, the attending physician")?
+
+ 5. OFFERS TO SEEK GUIDANCE FROM COLLEAGUES
+ - Did anyone express uncertainty and offer to consult with colleagues of the same discipline when unsure?
+
+ 6. COMMUNICATES ABOUT COST-EFFECTIVE AND TIMELY CARE
+ - Was there discussion about generic medications, diagnostic utility, or efficient care delivery?
+
+ 7. DIRECTS QUESTIONS TO OTHER HEALTH PROFESSIONALS BASED ON EXPERTISE
+ - Were questions appropriately directed to specific team members based on their expertise?
+
+ 8. AVOIDS DISCIPLINE-SPECIFIC TERMINOLOGY
+ - Did team members avoid or explain medical jargon, acronyms, and abbreviations when speaking?
+
+ 9. EXPLAINS DISCIPLINE-SPECIFIC TERMINOLOGY WHEN NECESSARY
+ - When technical terms were used, were they explained professionally when clarification was needed?
+
+ 10. COMMUNICATES ROLES AND RESPONSIBILITIES CLEARLY
+ - Were individual responsibilities and roles clearly articulated?
+
+ 11. ENGAGES IN ACTIVE LISTENING
+ - Was there evidence of active listening through verbal acknowledgments, nonverbal cues, or engaging responses?
+
+ 12. SOLICITS AND ACKNOWLEDGES PERSPECTIVES
+ - Did team members actively ask for and acknowledge input from other team members?
+
+ 13. RECOGNIZES APPROPRIATE CONTRIBUTIONS
+ - Was there verbal or nonverbal recognition when team members made valuable contributions to patient care?
+
+ 14. RESPECTFUL OF OTHER TEAM MEMBERS
+ - Was professionalism maintained? Were team members' expertise and lived experiences recognized?
+
+ 15. COLLABORATIVELY WORKS THROUGH INTERPROFESSIONAL CONFLICTS
+ - If disagreements occurred, were they handled professionally and collaboratively?
+
+ 16. REFLECTS ON STRENGTHS OF TEAM INTERACTIONS
+ - Did anyone comment on what went well in the team interaction?
+
+ 17. REFLECTS ON CHALLENGES OF TEAM INTERACTIONS
+ - Were difficulties or areas for improvement explicitly discussed?
+
+ 18. IDENTIFIES HOW TO IMPROVE TEAM EFFECTIVENESS
+ - Were specific suggestions made for improving future team collaboration?
+
+ STRUCTURE YOUR RESPONSE AS FOLLOWS:
+
+ ## OVERALL ASSESSMENT
+ Provide a brief overview of the team interaction quality and professionalism.
+
+ ## DETAILED COMPETENCY EVALUATION
+ For each of the 18 competencies, format as:
+
+ Competency [number]: [name]
+ Status: [OBSERVED/NOT OBSERVED]
+ Evidence: [Specific examples from the video, or explanation of why it wasn't observed]
+
+ ## STRENGTHS
+ List 3-5 key strengths observed in the team interaction
+
+ ## AREAS FOR IMPROVEMENT
+ List 3-5 specific areas where the team could improve
+
+ ## RECOMMENDATIONS
+ Provide 3-5 actionable recommendations for enhancing team collaboration and patient care
+
+ ## FINAL SCORE
+ Competencies Observed: X/18
+ Overall Performance Level: [Exemplary/Proficient/Developing/Needs Improvement]"""
+
+             response = self.model.generate_content([video_file, prompt])
+             return response.text
+
+         except Exception as e:
+             return f"❌ Error during assessment: {str(e)}"
+
+     def generate_audio_feedback(self, text):
+         """Convert assessment text to audio feedback"""
+
+         try:
+             # Clean text for speech
+             clean_text = re.sub(r'[#*_\[\]()]', ' ', text)
+             clean_text = re.sub(r'\s+', ' ', clean_text)
+             clean_text = re.sub(r'[-•·]\s+', '', clean_text)
+
+             # Limit text length for audio (gTTS has limits)
+             if len(clean_text) > 5000:
+                 clean_text = clean_text[:5000] + "... Assessment continues in the text report."
+
+             # Generate audio with gTTS
+             with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file:
+                 tts = gTTS(text=clean_text, lang='en', slow=False)
+                 tts.save(tmp_file.name)
+                 return tmp_file.name
+
+         except Exception as e:
+             print(f"⚠️ Audio generation failed: {str(e)}")
+             return None
+
+     def create_summary_report(self, assessment_text):
+         """Create a visual summary of competencies observed"""
+
+         # Parse the assessment to count observed competencies
+         observed_count = assessment_text.lower().count("observed") - assessment_text.lower().count("not observed")
+         total_competencies = 18
+         percentage = (observed_count / total_competencies) * 100
+
+         # Determine performance level
+         if percentage >= 85:
+             level = "Exemplary"
+             color = "#059669"
+         elif percentage >= 70:
+             level = "Proficient"
+             color = "#0891b2"
+         elif percentage >= 50:
+             level = "Developing"
+             color = "#f59e0b"
+         else:
+             level = "Needs Improvement"
+             color = "#dc2626"
+
+         summary_html = f"""
+ <div style="max-width:800px; margin:20px auto; padding:30px; background:white; border-radius:15px; box-shadow:0 4px 6px rgba(0,0,0,0.1);">
+ <h2 style="text-align:center; color:#1f2937;">CICE 2.0 Assessment Summary</h2>
+
+ <div style="display:flex; justify-content:space-around; margin:30px 0;">
+ <div style="text-align:center;">
+ <div style="font-size:48px; font-weight:bold; color:{color};">{observed_count}/{total_competencies}</div>
+ <div style="color:#6b7280;">Competencies Observed</div>
+ </div>
+ <div style="text-align:center;">
+ <div style="font-size:48px; font-weight:bold; color:{color};">{percentage:.0f}%</div>
+ <div style="color:#6b7280;">Overall Score</div>
+ </div>
+ </div>
+
+ <div style="text-align:center; padding:20px; background:#f9fafb; border-radius:10px;">
+ <div style="font-size:24px; font-weight:bold; color:{color};">Performance Level: {level}</div>
+ </div>
+
+ <div style="margin-top:30px;">
+ <h3>📋 CICE 2.0 Competency Areas:</h3>
+ <ol style="line-height:1.8; color:#374151;">
+ <li>Health Status Factors</li>
+ <li>Team Goals Identification</li>
+ <li>Goal Prioritization</li>
+ <li>Role Verbalization</li>
+ <li>Seeking Guidance</li>
+ <li>Cost-Effective Communication</li>
+ <li>Expertise-Based Questions</li>
+ <li>Avoiding Jargon</li>
+ <li>Explaining Terminology</li>
+ <li>Clear Role Communication</li>
+ <li>Active Listening</li>
+ <li>Soliciting Perspectives</li>
+ <li>Recognizing Contributions</li>
+ <li>Team Respect</li>
+ <li>Conflict Resolution</li>
+ <li>Strength Reflection</li>
+ <li>Challenge Reflection</li>
+ <li>Improvement Identification</li>
+ </ol>
+ </div>
+ </div>
+ """
+
+         return summary_html
+
+ def process_video_assessment(video_file):
+     """Main function to process video and return assessment results"""
+
+     if not video_file:
+         return "❌ Please record or upload a video file to analyze.", "", None
+
+     # Initialize assessment tool
+     assessor = CICEAssessment()
+
+     # Analyze the video
+     assessment_result = assessor.analyze_video(video_file)
+
+     if assessment_result.startswith("❌"):
+         return assessment_result, "", None
+
+     # Create summary report
+     summary_html = assessor.create_summary_report(assessment_result)
+
+     # Generate audio feedback
+     audio_path = assessor.generate_audio_feedback(assessment_result)
+
+     return assessment_result, summary_html, audio_path
+
+ def save_recorded_video(video_data_base64):
+     """Save the recorded video from base64 data"""
+     try:
+         # Decode base64 video data
+         video_data = base64.b64decode(video_data_base64)
+
+         # Create temporary file
+         with tempfile.NamedTemporaryFile(delete=False, suffix=".webm") as tmp_file:
+             tmp_file.write(video_data)
+             return tmp_file.name
+
+     except Exception as e:
+         print(f"Error saving recorded video: {e}")
+         return None
+
+ def create_video_recorder_html():
+     """Create the HTML/JavaScript for video recording interface"""
+
+     recorder_html = """
+ <div id="video-recorder-container" style="text-align: center; padding: 20px; background-color: #f0f0f0; border-radius: 10px;">
+ <h3>📹 Video Recorder - Healthcare Team Interaction</h3>
+ <video id="video" width="640" height="480" autoplay style="border: 2px solid #333; border-radius: 5px;"></video><br>
+ <div style="margin: 15px 0;">
+ <button id="start" class="btn" style="background-color: #4CAF50; color: white; padding: 10px 20px; margin: 5px; border: none; border-radius: 4px; cursor: pointer; font-size: 16px;">🔴 Start Recording</button>
+ <button id="stop" class="btn" disabled style="background-color: #f44336; color: white; padding: 10px 20px; margin: 5px; border: none; border-radius: 4px; cursor: pointer; opacity: 0.5; font-size: 16px;">⏹️ Stop Recording</button>
+ <button id="reset" class="btn" style="background-color: #2196F3; color: white; padding: 10px 20px; margin: 5px; border: none; border-radius: 4px; cursor: pointer; font-size: 16px;">🔄 Reset</button>
+ </div>
+ <div id="timer" style="margin-top: 10px; font-size: 18px; font-weight: bold; color: #333;">Ready to record</div>
+ <div id="status" style="margin-top: 10px; color: #666; font-size: 14px;"></div>
+ <div id="fileStatus" style="margin-top: 10px; color: #0066cc; font-weight: bold;"></div>
+
+ <!-- Hidden elements for data transfer -->
+ <input type="hidden" id="recorded-video-data" />
+ <input type="hidden" id="recording-complete" value="false" />
+ </div>
+
+ <script>
+ (function() {
+ var video = document.querySelector('#video');
+ var startBtn = document.querySelector('#start');
+ var stopBtn = document.querySelector('#stop');
+ var resetBtn = document.querySelector('#reset');
+ var timerDiv = document.querySelector('#timer');
+ var statusDiv = document.querySelector('#status');
+ var fileStatusDiv = document.querySelector('#fileStatus');
+ var recordedDataInput = document.querySelector('#recorded-video-data');
+ var recordingCompleteInput = document.querySelector('#recording-complete');
+
+ var mediaRecorder;
+ var recordedBlobs = [];
+ var stream;
+ var startTime;
+ var timerInterval;
+ var recordingCount = 0;
+
+ // Clear any previous status
+ fileStatusDiv.textContent = '';
+ statusDiv.textContent = 'Initializing camera and microphone...';
+
+ // Request camera and microphone access
+ navigator.mediaDevices.getUserMedia({
+ video: {
+ width: { ideal: 640, max: 640 },
+ height: { ideal: 480, max: 480 },
+ frameRate: { ideal: 24, max: 30 }
+ },
+ audio: {
+ echoCancellation: true,
+ noiseSuppression: true,
+ sampleRate: 44100
+ }
+ })
+ .then(function(s) {
+ stream = s;
+ video.srcObject = stream;
+ statusDiv.textContent = '✅ Camera and microphone ready! Click Start to begin recording.';
+ fileStatusDiv.textContent = 'No recording yet. Click Start to begin.';
+ })
+ .catch(function(error) {
+ console.error('Error accessing media devices:', error);
+ statusDiv.textContent = '❌ Error: ' + error.message + '. Please allow camera/microphone access and refresh.';
+ statusDiv.style.color = 'red';
+ });
+
+ function updateTimer() {
+ var elapsed = Math.floor((Date.now() - startTime) / 1000);
+ var minutes = Math.floor(elapsed / 60);
+ var seconds = elapsed % 60;
+ timerDiv.textContent = '🔴 Recording: ' + minutes.toString().padStart(2, '0') + ':' + seconds.toString().padStart(2, '0');
+
+ var maxDuration = 900; // 15 minutes
+ var remaining = maxDuration - elapsed;
+ if (remaining > 0) {
+ var remMin = Math.floor(remaining / 60);
+ var remSec = remaining % 60;
+ statusDiv.textContent = 'Time remaining: ' + remMin + 'm ' + remSec + 's';
+ }
+
+ // Auto-stop at duration limit
+ if (elapsed >= maxDuration) {
+ console.log('Max duration reached, auto-stopping...');
+ stopBtn.click();
+ }
+ }
+
+ startBtn.addEventListener('click', function() {
+ if (!stream) {
+ alert('Please allow camera and microphone access first!');
+ return;
+ }
+
+ // Clear previous recording data
+ recordedBlobs = [];
+ recordingCount++;
+ recordingCompleteInput.value = 'false';
+
+ fileStatusDiv.textContent = '📹 Recording in progress...';
+ fileStatusDiv.style.color = '#ff6600';
+
+ // Configure recording options
+ var options = {
+ mimeType: 'video/webm;codecs=vp8,opus',
+ videoBitsPerSecond: 800000,
+ audioBitsPerSecond: 128000
+ };
+
+ // Fallback if codec not supported
+ if (!MediaRecorder.isTypeSupported(options.mimeType)) {
+ options = {
+ mimeType: 'video/webm',
+ bitsPerSecond: 1000000
+ };
+ }
+
+ try {
+ mediaRecorder = new MediaRecorder(stream, options);
+ } catch (e) {
+ console.error('Error creating MediaRecorder:', e);
+ alert('Error creating MediaRecorder: ' + e.message);
+ return;
+ }
+
+ startBtn.disabled = true;
+ startBtn.style.opacity = '0.5';
+ stopBtn.disabled = false;
+ stopBtn.style.opacity = '1';
+
+ startTime = Date.now();
+ timerInterval = setInterval(updateTimer, 1000);
+ updateTimer();
+
+ mediaRecorder.ondataavailable = function(event) {
+ if (event.data && event.data.size > 0) {
+ recordedBlobs.push(event.data);
+ console.log('Data chunk received, size:', event.data.size);
+ }
+ };
+
+ mediaRecorder.onstop = function() {
+ clearInterval(timerInterval);
+ timerDiv.textContent = '⏹️ Recording stopped - Processing...';
+ statusDiv.textContent = 'Processing and preparing video for analysis...';
+ fileStatusDiv.textContent = '💾 Creating video file...';
+ fileStatusDiv.style.color = '#0066cc';
+
+ // Process the recorded video
+ setTimeout(function() {
+ try {
+ // Combine all recorded chunks into a single blob
+ var blob = new Blob(recordedBlobs, {type: 'video/webm'});
+ console.log('Total video size:', blob.size, 'bytes');
+
+ // Convert to base64
+ var reader = new FileReader();
+ reader.readAsDataURL(blob);
+ reader.onloadend = function() {
+ var base64data = reader.result.split(',')[1];
+
+ // Store the base64 data in hidden input
+ recordedDataInput.value = base64data;
+ recordingCompleteInput.value = 'true';
+
+ timerDiv.textContent = '✅ Recording complete!';
+ statusDiv.textContent = 'Video ready for analysis. Click "Analyze Recording" below.';
+ statusDiv.style.color = 'green';
+ fileStatusDiv.textContent = '✅ Video recorded successfully (' +
+ (blob.size / 1024 / 1024).toFixed(2) + ' MB)';
+ fileStatusDiv.style.color = 'green';
+
+ // Trigger a change event to notify Gradio
+ recordingCompleteInput.dispatchEvent(new Event('change'));
+
+ // Clear recorded data to free memory
+ recordedBlobs = [];
+ };
+ } catch (e) {
+ console.error('Error processing video:', e);
+ statusDiv.textContent = '❌ Error processing video: ' + e.message;
+ statusDiv.style.color = 'red';
+ fileStatusDiv.textContent = '❌ Video processing failed!';
+ fileStatusDiv.style.color = 'red';
+ }
+ }, 100);
+ };
+
+ // Start recording with periodic data collection
+ mediaRecorder.start(5000); // Collect data every 5 seconds
+ console.log('Recording started - session #' + recordingCount);
+ statusDiv.style.color = '#666';
+ });
+
+ stopBtn.addEventListener('click', function() {
+ if (mediaRecorder && mediaRecorder.state === 'recording') {
+ console.log('Stopping recording...');
+ mediaRecorder.stop();
+ startBtn.disabled = false;
+ startBtn.style.opacity = '1';
+ stopBtn.disabled = true;
+ stopBtn.style.opacity = '0.5';
+ }
+ });
+
+ resetBtn.addEventListener('click', function() {
+ // Reset the interface
+ if (mediaRecorder && mediaRecorder.state === 'recording') {
+ mediaRecorder.stop();
+ }
+ recordedBlobs = [];
+ recordedDataInput.value = '';
+ recordingCompleteInput.value = 'false';
+
+ startBtn.disabled = false;
+ startBtn.style.opacity = '1';
+ stopBtn.disabled = true;
+ stopBtn.style.opacity = '0.5';
+ timerDiv.textContent = 'Ready to record';
+ statusDiv.textContent = '✅ Camera and microphone ready! Click Start to begin recording.';
+ statusDiv.style.color = '#666';
+ fileStatusDiv.textContent = 'Interface reset. Ready for new recording.';
+ fileStatusDiv.style.color = '#0066cc';
+ });
+ })();
+ </script>
+ """
+
+     return recorder_html
+
+ def get_recorded_video_data():
+     """Get the recorded video data from the hidden input"""
+     # This will be called by Gradio when the recording is complete
+     return None
+
+ def process_recorded_video(recording_complete, video_data_base64):
+     """Process the recorded video when recording is complete"""
+
+     if recording_complete != 'true' or not video_data_base64:
+         return "❌ No recording available. Please record a video first.", "", None
+
+     try:
+         # Save the recorded video
+         video_path = save_recorded_video(video_data_base64)
+
+         if not video_path:
+             return "❌ Failed to save recorded video.", "", None
+
+         # Process the video
+         return process_video_assessment(video_path)
+
+     except Exception as e:
+         return f"❌ Error processing recorded video: {str(e)}", "", None
+
+ def ask_question_about_assessment(question, video_file_or_data, assessment_result, is_recorded_video=False):
+     """Allow users to ask specific questions about the assessment"""
+
+     if not assessment_result:
+         return "❌ Please record/upload and analyze a video first before asking questions."
+
+     if not question.strip():
+         return "❌ Please enter a question about the assessment."
+
+     try:
+         # Handle recorded video vs uploaded video
+         if is_recorded_video and video_file_or_data:
+             # For recorded video, save the base64 data first
+             video_path = save_recorded_video(video_file_or_data)
+             if not video_path:
+                 return "❌ Could not process recorded video for Q&A."
+             uploaded_video = genai.upload_file(path=video_path, display_name="healthcare_interaction_qa")
+         elif video_file_or_data:
+             # For uploaded video file
+             uploaded_video = genai.upload_file(path=video_file_or_data, display_name="healthcare_interaction_qa")
+         else:
+             return "❌ No video available for Q&A."
+
+         # Wait for processing
+         max_wait = 60
+         wait_time = 0
+         while uploaded_video.state.name == "PROCESSING" and wait_time < max_wait:
+             time.sleep(2)
+             wait_time += 2
+             uploaded_video = genai.get_file(uploaded_video.name)
+
+         model = genai.GenerativeModel("gemini-2.0-flash-exp")
+
+         prompt = f"""Based on the CICE 2.0 assessment of this healthcare team video,
+ please answer this specific question: {question}
+
+ Refer to the relevant competencies from the 18-point CICE framework in your answer.
+
+ Previous assessment results:
+ {assessment_result[:2000]}..."""  # Truncate to avoid token limits
+
+         response = model.generate_content([uploaded_video, prompt])
+         return response.text
+
+     except Exception as e:
+         return f"❌ Error answering question: {str(e)}"
+
+ # Global variables to store assessment state
+ current_video = None
+ current_assessment = None
+ current_video_data = None
+ is_current_recorded = False
+
+ def store_assessment_state(video_file, assessment_result, summary_html, audio_path, video_data=None, is_recorded=False):
+     """Store the current assessment state for Q&A"""
+     global current_video, current_assessment, current_video_data, is_current_recorded
+     current_video = video_file
+     current_assessment = assessment_result
+     current_video_data = video_data
+     is_current_recorded = is_recorded
+     return assessment_result, summary_html, audio_path
+
+ def qa_wrapper(question):
+     """Wrapper for Q&A functionality"""
+     global current_video, current_assessment, current_video_data, is_current_recorded
+
+     if is_current_recorded:
+         return ask_question_about_assessment(question, current_video_data, current_assessment, True)
+     else:
+         return ask_question_about_assessment(question, current_video, current_assessment, False)
+
+ # Create Gradio Interface
+ def create_gradio_app():
+     """Create the main Gradio application"""
+
+     with gr.Blocks(title="CICE 2.0 Healthcare Assessment Tool", theme=gr.themes.Soft()) as app:
+
+         gr.Markdown("""
+ # 🏥 CICE 2.0 Healthcare Assessment Tool
+
+ Record live healthcare team interactions or upload existing videos to receive a comprehensive assessment based on the 18-point CICE 2.0 interprofessional competency framework.
+
+ **Features:**
+ - 🎥 **Live Video Recording** with audio (just like in Colab!)
+ - 📁 Video file upload support
+ - ✅ 18-point competency evaluation
+ - 📊 Visual summary report
+ - 🔊 Audio feedback
+ - 💬 Interactive Q&A about results
+ - 📥 Downloadable assessment report
+ """)
+
+         with gr.Tab("🎥 Record Video & Assess"):
+             gr.Markdown("### Record Live Healthcare Team Interaction")
+
+             # Video recorder interface
+             recorder_interface = gr.HTML(
+                 value=create_video_recorder_html(),
+                 label="Video Recorder"
+             )
+
+             # Hidden inputs to capture recording data
+             recording_complete = gr.Textbox(
+                 value="false",
+                 visible=False,
+                 elem_id="recording-complete"
+             )
+
+             recorded_video_data = gr.Textbox(
+                 value="",
+                 visible=False,
+                 elem_id="recorded-video-data"
+             )
+
+             with gr.Row():
+                 analyze_recording_btn = gr.Button(
+                     "🔍 Analyze Recording",
+                     variant="primary",
+                     size="lg"
+                 )
+
+             with gr.Row():
+                 with gr.Column(scale=1):
+                     recording_summary_output = gr.HTML(
+                         label="Assessment Summary",
+                         value="<p>Record a video and click 'Analyze Recording' to see the summary.</p>"
+                     )
+
+                 with gr.Column(scale=1):
+                     recording_audio_output = gr.Audio(
+                         label="🔊 Audio Feedback",
+                         visible=True
+                     )
+
+             recording_assessment_output = gr.Textbox(
+                 label="Detailed Assessment Report",
+                 lines=15,
+                 max_lines=25,
+                 placeholder="Record a video and analyze it to see the detailed CICE 2.0 assessment..."
+             )
+
+         with gr.Tab("📁 Upload Video & Assess"):
+             gr.Markdown("### Upload Existing Healthcare Team Video")
+
+             with gr.Row():
+                 with gr.Column(scale=1):
+                     video_input = gr.Video(
+                         label="Upload Healthcare Team Video",
+                         height=400
+                     )
+
+                     assess_btn = gr.Button(
+                         "🔍 Analyze Video",
+                         variant="primary",
+                         size="lg"
+                     )
+
+                 with gr.Column(scale=2):
+                     summary_output = gr.HTML(
+                         label="Assessment Summary",
+                         value="<p>Upload a video and click 'Analyze Video' to see the summary.</p>"
+                     )
+
+             with gr.Row():
+                 assessment_output = gr.Textbox(
+                     label="Detailed Assessment Report",
+                     lines=20,
+                     max_lines=30,
+                     placeholder="Detailed CICE 2.0 assessment will appear here..."
+                 )
+
+             with gr.Row():
+                 audio_output = gr.Audio(
+                     label="🔊 Audio Feedback",
+                     visible=True
+                 )
+
+         with gr.Tab("💬 Q&A About Assessment"):
+             gr.Markdown("Ask specific questions about the assessment results from either recorded or uploaded videos.")
+
+             with gr.Row():
+                 question_input = gr.Textbox(
+                     label="Your Question",
+                     placeholder="e.g., 'Was active listening demonstrated?' or 'How did the team handle conflicts?'",
+                     lines=2
+                 )
+                 ask_btn = gr.Button("Ask Question", variant="secondary")
+
+             qa_output = gr.Textbox(
+                 label="Answer",
+                 lines=10,
+                 placeholder="Answers to your questions will appear here..."
+             )
+
+             gr.Markdown("""
+ **Example Questions:**
+ - Was active listening demonstrated?
+ - How did the team handle conflicts?
+ - What improvements are recommended?
+ - Which competencies were most strongly observed?
+ - What specific evidence was found for role verbalization?
+ """)
+
+         with gr.Tab("📋 About CICE 2.0"):
+             gr.Markdown("""
+ ## About the CICE 2.0 Assessment Framework
+
+ The Collaborative Interprofessional Competencies Evaluation (CICE) 2.0 is a validated instrument for assessing interprofessional collaboration competencies in healthcare teams.
+
+ ### The 18 Core Competencies:
+
+ **Values/Ethics:**
+ 1. Health Status Factors
+ 2. Team Goals Identification
+ 3. Goal Prioritization
+ 4. Role Verbalization
+ 5. Seeking Guidance
+ 6. Cost-Effective Communication
+
+ **Roles/Responsibilities:**
+ 7. Expertise-Based Questions
+ 8. Avoiding Jargon
+ 9. Explaining Terminology
+ 10. Clear Role Communication
+
+ **Interprofessional Communication:**
+ 11. Active Listening
+ 12. Soliciting Perspectives
+ 13. Recognizing Contributions
+ 14. Team Respect
+ 15. Conflict Resolution
+
+ **Teams and Teamwork:**
+ 16. Strength Reflection
+ 17. Challenge Reflection
+ 18. Improvement Identification
+
+ ### Performance Levels:
+ - **Exemplary** (85%+): Exceptional interprofessional collaboration
+ - **Proficient** (70-84%): Good collaboration with minor areas for improvement
+ - **Developing** (50-69%): Adequate collaboration with several improvement opportunities
+ - **Needs Improvement** (<50%): Significant development required
+
+ ### Recording Tips:
+ - Ensure good lighting and clear audio
+ - Position the camera to capture all team members
+ - Allow natural team interactions
+ - Include patient case discussions
+ - Record for at least 5-10 minutes for a comprehensive assessment
+ """)
+
+         # Event handlers for recorded videos
+         analyze_recording_btn.click(
+             fn=lambda complete, data: store_assessment_state(
+                 None,
+                 *process_recorded_video(complete, data),
+                 video_data=data,
+                 is_recorded=True
+             ),
+             inputs=[recording_complete, recorded_video_data],
+             outputs=[recording_assessment_output, recording_summary_output, recording_audio_output]
+         )
+
+         # Event handlers for uploaded videos
+         assess_btn.click(
+             fn=lambda video: store_assessment_state(video, *process_video_assessment(video)),
+             inputs=[video_input],
+             outputs=[assessment_output, summary_output, audio_output]
+         )
+
+         # Q&A handlers
+         ask_btn.click(
+             fn=qa_wrapper,
+             inputs=[question_input],
+             outputs=[qa_output]
+         )
+
+         # Enter key support for questions
+         question_input.submit(
+             fn=qa_wrapper,
+             inputs=[question_input],
+             outputs=[qa_output]
+         )
+
+         # Auto-detect when recording is complete (JavaScript to Gradio communication)
+         recording_complete.change(
+             fn=lambda: None,  # Just trigger an update
+             inputs=[],
+             outputs=[]
+         )
+
+     return app
+
+ if __name__ == "__main__":
+     # Create and launch the app
+     app = create_gradio_app()
+
+     # Launch with sharing enabled for deployment
+     app.launch(
+         server_name="0.0.0.0",
+         server_port=7860,
+         share=True,
+         show_error=True
+     )
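
Deployment note (not part of this commit): for the Space to build, requirements.txt needs gradio, google-generativeai, and gTTS, and the GOOGLE_API_KEY secret must be configured. For a quick local smoke test of the assessment pipeline in app.py, a minimal driver along the following lines should work; the placeholder key and the sample video path are assumptions for illustration, not files from this repository.

# local_smoke_test.py -- hypothetical helper script, not part of the commit
import os

os.environ.setdefault("GOOGLE_API_KEY", "YOUR-KEY-HERE")  # placeholder; a real key is required

from app import process_video_assessment  # app.py from this commit

# Point this at any short healthcare team-interaction recording on disk.
report, summary_html, audio_path = process_video_assessment("sample_interaction.mp4")

print(report[:500])                        # beginning of the detailed CICE 2.0 report
print("Audio feedback file:", audio_path)  # temporary .mp3 path, or None if gTTS failed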